396 Commits

Author SHA1 Message Date
xbgmsharp
57a754cdc0 Update tests versions, PostgREST 13.0.5 2025-08-27 22:39:45 +02:00
xbgmsharp
46f16fb077 Update README, add deepwiki 2025-08-24 09:43:23 +02:00
xbgmsharp
4c80d041cc Update tests versions, PostgreSQL 16.10, timescaledb 2.21.3 2025-08-18 19:36:57 +02:00
xbgmsharp
12e4baf662 Release PostgSail 0.9.3 2025-08-03 10:37:02 +02:00
xbgmsharp
faf62ed9a3 Update tests, PostgSail 0.9.3 2025-08-02 16:15:12 +02:00
xbgmsharp
40bdb9620f Update tests, PostgSail 0.9.3 2025-08-02 16:13:29 +02:00
xbgmsharp
8fe84ea80c Update migration 202507
- Update plugin upgrade message
- Update api.login, update the connected_at field to the current time
- Update monitoring_history_fn to use custom user settings for metrics
- Update cron_alerts_fn to check for alerts, filtering out empty strings ("") so they are not included in the result.
- Update process_pre_logbook_fn to detect and skip logbooks covering more than 1000NM in less than 15h (see the sketch below)
2025-08-02 16:07:29 +02:00
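The 1000 NM / 15 h guard above is a plausibility check: that distance in that time implies an impossible ~66 kts average. A minimal sketch of such a guard, assuming illustrative column names (distance in NM, _from_time/_to_time timestamps) rather than the exact PostgSail schema:

    -- Flag logbooks claiming more than 1000 NM in under 15 hours;
    -- column names are illustrative, not the actual PostgSail schema.
    SELECT l.id
        FROM api.logbook l
        WHERE l.distance > 1000
          AND (l._to_time - l._from_time) < INTERVAL '15 hours';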
xbgmsharp
54eefe582d Update migration 202505
- Update monitoring view with true wind speed
2025-08-02 11:31:32 +02:00
xbgmsharp
d60af8c7b0 Update tests, TimescaleDB: 2.21.1 2025-08-02 10:40:43 +02:00
xbgmsharp
b505c98723 Update README 2025-08-02 10:32:14 +02:00
xbgmsharp
976ea85538 Update logbook tests 2025-07-14 08:56:48 +02:00
xbgmsharp
347af573c2 Update tests versions, timescaledb 2.21.0 2025-07-14 08:56:17 +02:00
xbgmsharp
60ba821af7 Update tests, add stays_ext unit tests 2025-06-22 10:42:33 +02:00
xbgmsharp
9759045b0a Update tests versions, PostgREST 13.0.4 2025-06-22 10:41:33 +02:00
xbgmsharp
a6c351c936 Update web UI front-end to v0.1.0-beta16 2025-06-21 13:00:39 +02:00
xbgmsharp
b7efb9636f Release 0.9.2 2025-06-15 22:22:55 +02:00
xbgmsharp
14d19a5394 Update migration 202505.
- Update metadata_ext table add REFERENCES and image_url
- Add stays_ext, new table to store vessel extended stays from user
- Update metadata_update_configuration_trigger, make sure it runs only when needed.
- Create update_stays_ext_decode_base64_image_trigger_fn to decode base64 image
- Update function public.autodiscovery_config_fn and cron_process_autodiscovery_fn, handle the case where no metric is available
- Create api.stays_image to fetch stays image
- Update api.vessel_image to fetch boat image
- Update public.stay_active_geojson_fn function to produce a GeoJSON with the last position and stay details and anchor details
- Update api.monitoring_live, add anchor support
- Create view api.noteshistory_view, list stays and moorages notes ordered by stays
- Update public.process_stay_queue_fn, replace the '0 day' stay name with 'short stay'
- Create view api.logs_geojson_view, List logs with geojson
- Create view api.moorages_geojson_view, List moorages with geojson
- Create api.export_vessel_geojson_fn, export vessel as geojson
- Update api.export_logbook_geojson_linestring_trip_fn, add more trip properties to geojson
- Update api.export_logbook_geojson_trip_fn, add more trip properties to geojson
- Update api.export_logbooks_geojson_linestring_trips_fn, add more trip properties to geojson
- Update TABLE api.stays_ext with ROW LEVEL SECURITY (see the sketch below)
2025-06-15 22:13:53 +02:00
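The row level security mentioned above typically isolates rows per vessel. A minimal sketch, assuming a vessel_id column and a session variable named vessel.id (both assumptions, not the exact PostgSail policy):

    ALTER TABLE api.stays_ext ENABLE ROW LEVEL SECURITY;
    -- Illustrative policy: each vessel sees only its own rows; the
    -- 'vessel.id' session variable name is an assumption.
    CREATE POLICY stays_ext_vessel_policy ON api.stays_ext
        USING (vessel_id = current_setting('vessel.id', true));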
xbgmsharp
66b61f9d65 Update tests versions, timescaledb 2.20.3 2025-06-15 19:35:59 +02:00
xbgmsharp
b50c8f5007 Update REST API OpenAPI documentation 2025-06-09 10:19:44 +02:00
xbgmsharp
a9e1990184 Update postgsail ERD 2025-06-09 10:19:03 +02:00
xbgmsharp
59b52515e3 Update tests versions, timescaledb 2.20.2, PostgREST 13.0.2
Add stays_ext, metadata_ext table
2025-06-09 10:18:46 +02:00
xbgmsharp
f528456c08 Update migration 202505.
- Create trigger to process metadata for autodiscovery_config provisioning
- Create function public.autodiscovery_config_fn, to generate autodiscovery monitoring configuration
- Create cron_process_autodiscovery_fn to process autodiscovery config provisioning
- Refactor metadata_upsert_trigger_fn with the new metadata schema, remove id and check valid mmsi.
- Update email_templates table, add new autodiscovery template
2025-05-25 15:49:05 +02:00
xbgmsharp
a76c25b19f Update cron process output, add autodiscovery 2025-05-25 15:47:37 +02:00
xbgmsharp
00d2247549 Update cron job tests. Add support for autodiscovery 2025-05-25 10:57:38 +02:00
xbgmsharp
b84ac31da1 Update grafana tests 2025-05-25 10:57:03 +02:00
xbgmsharp
63cf5d24a5 Update tests versions, postgis 3.5.3 2025-05-25 10:55:42 +02:00
xbgmsharp
fe7c1dc1e5 Update metadata table test 2025-05-20 22:03:10 +02:00
xbgmsharp
10a26942c7 Update API schema 2025-05-20 22:02:30 +02:00
xbgmsharp
a5fb08fa42 Update ERD 2025-05-20 22:02:21 +02:00
xbgmsharp
25c74fd75a Update migration 202505
- Update metadata table column type MMSI to text
- Update metadata upsert trigger, to ignore non-valid mmsi
- Update api.monitoring_live, fix invalid path typo
- Update public.logbook_update_metrics_timebucket_fn, fix invalid path typo and query
- Update public.logbook_update_metrics_fn, fix invalid path typo
- Update public.logbook_update_metrics_short_fn, fix invalid path typo
2025-05-20 21:58:00 +02:00
xbgmsharp
fa45782553 Update migration query order 2025-05-19 23:32:02 +02:00
xbgmsharp
e237391e8a Update API doc 2025-05-19 23:20:01 +02:00
xbgmsharp
dcf4eaca9b Add migration 202505
- Update metadata table, add IP address column, remove id column, update vessel_id default
- Add metadata_ext, new table to store vessel extended metadata from user
- Cleanup trigger on api schema, Move trigger on public schema
- Create trigger to update polar_updated_at and image_updated_at accordingly
- Create update_metadata_ext_decode_base64_image_trigger_fn to decode base64 image (see the sketch below)
- Refactor metadata_upsert_trigger_fn with the new metadata schema, remove id.
- Update metadata_grafana_trigger_fn with the new metadata schema, remove id.
- Create api.vessel_image to fetch vessel image
- Create api.vessel_extended_fn() to expose extended vessel details
- Update api.vessel_details_fn to use configuration metadata
- Update api.vessel_fn to use metadata_ext
- Update public.stay_active_geojson_fn function to produce a GeoJSON with the last position and stay details
- Update monitoring view to support live moorage in GeoJSON
- Update public.overpass_py_fn to check for seamark with name
- Update api.export_logbooks_geojson_linestring_trips_fn, add extra, _to_moorage_id, _from_moorage_id metadata
- Update api.monitoring_live, add live tracking view, add support for 6h outside barometer
- Update public.logbook_update_metrics_short_fn, aggregate more metrics and use user configuration
- Update public.logbook_update_metrics_fn, aggregate more metrics and use user configuration
- Update public.logbook_update_metrics_timebucket_fn, aggregate more metrics and use user configuration
- Update public.process_logbook_queue_fn to use new mobilitydb metrics
- Remove unnecessary functions
- Add missing comments on function
- Update public.cron_process_monitor_online_fn, refactor of metadata
- Update public.cron_process_monitor_offline_fn, Refactor metadata
- Update public.cron_process_grafana_fn, Refactor metadata
- Update permissions and RLS
- Update cron_process_skplugin_upgrade_fn, update check for signalk plugin version
2025-05-19 23:19:04 +02:00
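The base64 decode trigger referenced above amounts to a BEFORE trigger converting a text payload to bytea with PostgreSQL's decode(). A sketch with assumed column names (image_b64 in, image out), not the actual function body:

    CREATE OR REPLACE FUNCTION public.update_metadata_ext_decode_base64_image_trigger_fn()
    RETURNS trigger AS $$
    BEGIN
        -- Decode user-supplied base64 text into binary before storing it;
        -- 'image_b64' and 'image' are illustrative column names.
        IF NEW.image_b64 IS NOT NULL THEN
            NEW.image := decode(NEW.image_b64, 'base64');
            NEW.image_updated_at := now();
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER metadata_ext_decode_base64_image_trigger
        BEFORE INSERT OR UPDATE ON api.metadata_ext
        FOR EACH ROW EXECUTE FUNCTION public.update_metadata_ext_decode_base64_image_trigger_fn();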
xbgmsharp
ade15f538d Add more properties in geojson linestring 2025-05-17 18:42:01 +02:00
xbgmsharp
f2cf604dab Add more properties in geojson linestring 2025-05-17 17:22:51 +02:00
xbgmsharp
ad2e95bfa8 Update metadata tests, Refactor metadata table, Add metadata_ext tests 2025-05-17 17:22:24 +02:00
xbgmsharp
02130a9e4f Update grafana tests, refactor metadata table 2025-05-17 17:22:05 +02:00
xbgmsharp
29fa3863eb Update ERD, refactor metadata table 2025-05-17 17:20:46 +02:00
xbgmsharp
7cd06fced4 Update tests versions, timescaledb 2.20.0 2025-05-17 17:20:03 +02:00
xbgmsharp
564b85f58c Update postgsail ERD 2025-05-11 14:41:10 +02:00
xbgmsharp
da317dce87 Update tests, trying to improve overpass-api results 2025-05-11 14:30:38 +02:00
xbgmsharp
08ee757fa5 Update tests versions, PostgreSQL 16.9, PostgREST 12.3.0 2025-05-11 14:29:18 +02:00
xbgmsharp
76ade18d6b Update API doc 2025-05-11 14:27:03 +02:00
xbgmsharp
e6852a43f1 Update README 2025-05-11 14:26:49 +02:00
xbgmsharp
13f8240838 Release 0.9.1-202504 2025-05-04 22:41:27 +02:00
xbgmsharp
b7fe6a27b2 Update tests versions, PostgREST 12.2.12 2025-05-04 22:41:13 +02:00
xbgmsharp
29cc40f6de Update SQL comments 2025-05-04 22:32:42 +02:00
xbgmsharp
861e61d378 Update sql tests, name change in OSM data 2025-05-04 22:27:15 +02:00
xbgmsharp
684f34644f Update tests, improve and add more test 2025-05-04 22:26:37 +02:00
xbgmsharp
6ad9980cd2 Update tests, Update api.eventlogs_view skip the new_stay 2025-05-04 22:25:48 +02:00
xbgmsharp
7f5974efe2 Update tests, Remove deprecated client_id 2025-05-04 22:25:08 +02:00
xbgmsharp
f4b65d3156 Update README.md 2025-05-04 22:23:41 +02:00
xbgmsharp
d8ef8b8958 Add migration 202504
- Install and update TimescaleDB Toolkit
- Remove deprecated client_id
- Remove index from logbook columns
- Remove deprecated column from api.logbook
- Add new mobilityDB support with comments
- Update api.export_logbook_geojson_linestring_trip_fn, add more metadata properties
- Update public.check_jwt, allow anonymous access to the new mobilitydb export geojson functions
- Create api.monitoring_upsert_fn, the function that updates the api.metadata monitoring configuration (see the sketch below)
- Update cron_process_skplugin_upgrade_fn, update check for signalk plugin version
- Update api.find_log_from_moorage_fn using the mobilitydb trajectory
- Update api.find_log_to_moorage_fn using the mobilitydb trajectory
- Update api.eventlogs_view to fetch the events logs backwards and skip the new_stay
- Update api.stats_logs_fn, ensure the trip is completed
- Update api.stats_fn, ensure the trip is completed
- Update moorage_delete_trigger_fn, when a moorage is deleted, delete process_queue references as well.
- Create public.logbook_delete_trigger_fn, when a logbook is deleted, logbook_ext needs to be deleted as well.
- Update metadata table, mark client_id as deprecated
- Update public.process_lat_lon_fn, Add new moorage reference in public.process_queue for event logs review
- Add Description on missing trigger
- Update public.logbook_active_geojson_fn, fix log_gis_line as there is no end time yet
- Create public.stay_active_geojson_fn function to produce a GeoJSON with the last position and stay details
- Update monitoring view to support live moorage in GeoJSON
- Add api.monitoring_live view, the live tracking view
- Refresh permissions
2025-05-04 22:23:00 +02:00
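Since the configuration column on api.metadata is jsonb (see migration 202503 further down), monitoring_upsert_fn plausibly reduces to a jsonb merge; a sketch, with the key name and session variable purely illustrative:

    -- Merge a partial monitoring configuration into the stored jsonb;
    -- '||' overwrites top-level keys. The 'monitoring' key and the
    -- 'vessel.id' setting are illustrative assumptions.
    UPDATE api.metadata
       SET configuration = COALESCE(configuration, '{}'::jsonb)
           || '{"monitoring": {"voltageKey": "electrical.batteries.House.voltage"}}'::jsonb
     WHERE vessel_id = current_setting('vessel.id', true);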
xbgmsharp
686ac7498b Update web UI front-end to v0.1.0-beta15 2025-05-04 22:17:20 +02:00
xbgmsharp
d4c4347a4c Update web UI front-end to v0.1.0-beta14 2025-04-26 10:20:06 +02:00
xbgmsharp
4aacae3913 Update tests versions, PostgREST 12.2.11 2025-04-26 10:06:48 +02:00
xbgmsharp
06fd834441 Update metadata tests, ensure a result for api.metadata.configuration based on the update_at value 2025-04-21 23:42:12 +02:00
xbgmsharp
86bd4b5843 Update SQL tests, update event logs view revert b6fef6358a 2025-04-21 23:40:54 +02:00
xbgmsharp
111d7d36db Update tests versions, timescaledb_toolkit 1.21.0, timescaledb 2.19.3, PostgREST 12.2.10 2025-04-21 22:32:00 +02:00
xbgmsharp
b6fef6358a Update SQL tests, update event logs view 2025-04-13 08:35:06 +02:00
xbgmsharp
4aecea7532 Update tests versions, timescaledb 2.19.2 2025-04-13 08:34:28 +02:00
xbgmsharp
c8908748f7 Update ERD
- update logbook metrics
2025-04-06 22:36:40 +02:00
xbgmsharp
0e812c0939 Update mobilityDB unit sql tests, new geojson properties 2025-04-06 09:26:55 +02:00
xbgmsharp
395b7cfad7 Update sql unit test, Improve anonymous test 2025-04-06 00:00:26 +02:00
xbgmsharp
14e8c8363c Update OpenAPI 2025-04-05 23:59:52 +02:00
xbgmsharp
448124f01b Update ERD
- Deprecated client_id
- Add new logbook metrics
2025-04-05 23:58:46 +02:00
xbgmsharp
3b466e3d93 Update tests versions, Add timescaledb_toolkit 1.19.0 2025-04-04 14:17:58 +02:00
xbgmsharp
f0ddca7d58 Update tests versions, timescaledb 2.19.1 2025-04-04 14:16:30 +02:00
xbgmsharp
7744ad4af9 Release 0.9.0-202503 2025-03-31 17:38:52 +02:00
xbgmsharp
5c70a9a453 Update cron_post_jobs output 2025-03-31 13:37:42 +02:00
xbgmsharp
6635015dbf Update mobilitydb SQL test 2025-03-31 13:37:26 +02:00
xbgmsharp
f285fcddb0 Add migration 202503
- Update metadata table, mark client_id as deprecated
- Update metadata table update configuration column type to jsonb
- Update metadata table add new column available_keys
- Update metadata_upsert_trigger_fn to metadata table
- Add metadata table trigger for update_metadata_configuration
- Update api.export_logbook_geojson_linestring_trip_fn, add metadata
- Add public.get_season, return the season based on the input date for logbook tag (see the sketch below)
- Refresh permissions
2025-03-31 13:34:39 +02:00
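public.get_season above maps a date to a season for logbook tagging; a minimal sketch assuming northern-hemisphere meteorological seasons, which may differ from the real implementation:

    CREATE OR REPLACE FUNCTION public.get_season(d TIMESTAMPTZ)
    RETURNS TEXT AS $$
        -- Meteorological seasons by month; hemisphere handling is an
        -- assumption, the real function may differ.
        SELECT CASE
            WHEN EXTRACT(MONTH FROM d) IN (12, 1, 2) THEN 'winter'
            WHEN EXTRACT(MONTH FROM d) IN (3, 4, 5)  THEN 'spring'
            WHEN EXTRACT(MONTH FROM d) IN (6, 7, 8)  THEN 'summer'
            ELSE 'autumn'
        END;
    $$ LANGUAGE sql IMMUTABLE;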
xbgmsharp
cabf405648 Update mobilitydb SQL test 2025-03-31 13:32:56 +02:00
xbgmsharp
b2f3372b26 Update API tests
- Deprecated client_id
- Update configuration
- Add available_keys
2025-03-31 13:32:19 +02:00
xbgmsharp
754c9bb6e7 Update tests versions, PostgSail 0.9.0, timescaledb 2.19.0 2025-03-31 13:30:12 +02:00
xbgmsharp
f9238c62dd Update OpenAPI 2025-03-31 13:29:22 +02:00
xbgmsharp
4a294674e8 Add metadata SQL unit test 2025-03-31 13:29:03 +02:00
xbgmsharp
83e92cfd6c Add metadata SQL test 2025-03-31 13:28:42 +02:00
xbgmsharp
718ca6d6ea Update grafana SQL tests
- Deprecated client_id
- Update configuration
- Add available_keys
2025-03-31 13:28:17 +02:00
xbgmsharp
a07f4f181c Update ERD
- Deprecated client_id
- Update configuration
- Add available_keys
2025-03-31 13:27:09 +02:00
xbgmsharp
72b06f9eb9 Update README 2025-03-14 16:01:23 +01:00
xbgmsharp
598a789d36 Update docker compose file, remove deprecated version 2025-02-27 15:19:11 +01:00
xbgmsharp
37e948cb20 Update tests versions, PostgreSQL 16.8, timescaledb 2.18.2 PostgREST 12.2.8 2025-02-27 15:17:53 +01:00
xbgmsharp
f26ece878b Update tests versions, PostGIS 3.5.2, timescaledb 2.18.0, PostgREST 12.2.6 2025-02-02 09:47:07 +01:00
xbgmsharp
9f8b43577e Update tests 2025-01-16 11:05:46 +01:00
xbgmsharp
0c76edf793 Update tests 2025-01-16 10:31:20 +01:00
xbgmsharp
0cac828347 Update front-end to v0.1.0-beta14 2025-01-16 10:30:38 +01:00
xbgmsharp
9e9189ac36 Update migration 202412:
- Update public.logbook_update_metrics_short_fn, handle corner use case
- Update public.logbook_update_metrics_fn, handle corner use case
- Update public.logbook_update_metrics_timebucket_fn, handle corner use case
2025-01-16 08:59:50 +01:00
xbgmsharp
5409f1eec9 Update migration 202412:
- Update api.export_logbooks_geojson_point_trips_fn,
- Update export_logbook_geojson_trip_fn, update geojson from trip to geojson with more properties
2025-01-09 21:19:45 +01:00
xbgmsharp
e5491ae0c9 Update migration 202412
- Update api.export_logbook_geojson_point_trip_fn, fix dynamic notes for good
2025-01-08 23:37:51 +01:00
xbgmsharp
1b57641e7d Update migration 202412
- Update api.export_logbook_geojson_point_trip_fn, fix dynamic notes
2025-01-08 23:30:44 +01:00
xbgmsharp
aa7608e07e Update migration 202412
- Update api.stats_fn, due to reference_count and stay_duration columns removal
- Update api.stats_stays_fn, due to reference_count and stay_duration columns removal
- Update log_view with dynamic GeoJSON, change geojson export fn
- Update delete_trip_entry_fn, support additional temporal sequence columns (depth, etc.)
- Update export_logbook_geojson_trip_fn, update geojson from trip with additional temporal sequence columns (depth, etc.)
- Update api.export_logbook_geojson_point_trip_fn, update geojson from trip with additional temporal sequence columns (depth, etc.)
2025-01-08 23:17:06 +01:00
xbgmsharp
9575eba043 Release 0.8.1-202412 2025-01-06 09:06:45 +01:00
xbgmsharp
aa7450271d Update overpass tests 2025-01-05 18:20:03 +01:00
xbgmsharp
893a7fc46f Update tests versions, PostGIS 3.5.1 2025-01-05 18:14:24 +01:00
xbgmsharp
3a035f3519 Update migration 202412
- Remove deprecated column from api.moorages
- Update api.moorage_view, due to stay_duration column removal
- Update stats_moorages_view, due to stay_duration column removal
- Update stats_moorages_away_view, due to stay_duration column removal
2025-01-05 18:01:50 +01:00
xbgmsharp
1355629c4e Add migration 202412
- Full support for MobilityDB
- Add new mobilityDB support
- Remove deprecated column from api.logbook
- Update public.logbook_update_metrics_short_fn, aggregate more metrics
- Update public.logbook_update_metrics_fn, aggregate more metrics
- Update public.logbook_update_metrics_timebucket_fn, aggregate more metrics
- Update api.merge_logbook_fn, add support for mobility temporal type
- Update export_logbook_geojson_trip_fn, update geojson from trip to geojson
- Create api.export_logbook_geojson_point_trip_fn, transform spatiotemporal trip into a geojson with the corresponding properties
- Update public.process_lat_lon_fn remove deprecated moorages columns
- Update logbook table, add support for mobility temporal type
- Update public.badges_geom_fn remove track_geom and use mobilitydb trajectory
- Update public.process_stay_queue_fn remove calculation of stay duration and count
- Update public.badges_moorages_fn remove calculation of stay duration and count
- Update api.find_log_from_moorage_fn using the mobilitydb trajectory (see the sketch below)
- Update api.find_log_to_moorage_fn using the mobilitydb trajectory
- Update api.delete_logbook_fn to delete moorage dependency using mobilitydb
- Update public.qgis_bbox_py_fn to use mobilitydb trajectory
- Update public.qgis_bbox_trip_py_fn to use mobilitydb trajectory
2025-01-05 17:37:33 +01:00
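The mobilitydb trajectory lookups above boil down to materializing the temporal trip as a geometry and testing it spatially. A sketch of the from-moorage case, assuming a temporal point column trip on api.logbook and a geog column on api.moorages (names and the 100 m radius are illustrative):

    -- trajectory() turns a MobilityDB temporal point into a plain geometry;
    -- ST_StartPoint takes the trip's departure position.
    SELECT l.id, l.name
        FROM api.logbook l, api.moorages m
        WHERE m.id = 1
          AND ST_DWithin(
                ST_StartPoint(trajectory(l.trip)::geometry)::geography,
                m.geog,
                100);  -- metres, illustrative radius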
xbgmsharp
0b8a9950e8 Update tests
- Add full support for MobilityDB
2025-01-05 17:36:48 +01:00
xbgmsharp
1c1cf70ae2 Update ERD 2025-01-05 17:35:49 +01:00
xbgmsharp
f7cf07ca99 Update OpenAPI 2025-01-05 17:35:27 +01:00
xbgmsharp
00056ec4f0 Merge pull request #14 from Marinminds/main
Update Self‐hosted-installation-guide on AWS.md
2024-12-12 14:56:50 +01:00
koenraad
717a85c3ec Update Self‐hosted-installation-guide on AWS.md
Add SQL queries
2024-12-12 12:09:50 +01:00
koenraad
609fb0a05d Update Self‐hosted-installation-guide on AWS.md
update markup
add pgadmin instructions
2024-12-12 11:40:25 +01:00
xbgmsharp
22a6b7eb65 Update docs 2024-12-12 10:07:09 +01:00
xbgmsharp
561c695f32 Update doc 2024-12-07 12:07:45 +01:00
xbgmsharp
c51059a431 Update migration 202411
Update public.logbook_update_metrics_fn, aggregate metrics by time-series to reduce size
2024-12-07 12:05:51 +01:00
xbgmsharp
27081c32f7 Release 0.8.0-202411 2024-12-06 12:54:55 +01:00
xbgmsharp
c84cfb9547 Update tests versions, PostgreSQL 16.6, timescaledb 2.17.2, mobilitydb 1.2.0 2024-12-06 12:49:53 +01:00
xbgmsharp
9be725fa24 Release 0.8.0-202411 2024-12-06 12:47:23 +01:00
xbgmsharp
40675a467e Update migration 202411
Force timestamp cast
2024-12-06 12:46:36 +01:00
xbgmsharp
f90356c2a7 Update doc 2024-12-06 12:45:36 +01:00
xbgmsharp
5f89f63223 Update documentation 2024-12-03 21:33:10 +01:00
xbgmsharp
8a9abf5340 Update documentation 2024-12-03 21:31:36 +01:00
xbgmsharp
fbf6047b46 Update documentation, add self-hosted guide.
Wiki is not PR friendly
2024-12-03 21:27:26 +01:00
xbgmsharp
7b17bbcae1 Merge pull request #12 from Marinminds/main
Add guide for self hosting postgsail on AWS
2024-12-03 21:19:07 +01:00
xbgmsharp
65455c93af Update sql unit tests badges 2024-12-03 21:10:00 +01:00
xbgmsharp
48bba3eb99 Update grafana unit tests 2024-12-03 21:09:44 +01:00
xbgmsharp
0c0071236e Update migration 202411. update comments. 2024-12-03 21:08:42 +01:00
xbgmsharp
23d3586a2c Update migration 202411
-- Add MobilityDB support
-- Update logbook tbl, add trip from mobilitydb
-- Update logbook table, add support for mobility temporal type
-- Add api.export_logbook_geojson_linestring_trip_fn, transform spatiotemporal trip into a geojson with the corresponding properties
-- Add api.export_logbook_geojson_point_trip_fn, transform spatiotemporal trip into a geojson with the corresponding properties
-- Add logbook_update_geojson_trip_fn, update geojson from trip to geojson
-- Add api.export_logbook_gpx_trip_fn, update gpx from a trip
-- Add api.export_logbook_kml_trip_fn, update kml from a trip
-- Add api.export_logbook_geojson_linestring_trip_fn, replace timelapse_fn, transform spatiotemporal trip into a geojson with the corresponding properties
-- Add export_logbooks_geojson_point_trips_fn, replace timelapse2_fn, Generate the GeoJSON from the time sequence value
-- Add export_logbook_geojson_trip_fn, update geojson from trip to geojson
-- Update api.export_logbooks_gpx_trips_fn
-- Update api.export_logbooks_kml_trips_fn
-- Add update_trip_notes_fn, add temporal sequence into a trip notes
-- Add delete_trip_entry_fn, delete temporal sequence into a trip
-- Update log_view with dynamic GeoJSON from trip
-- Update api.vessels_view
-- Update api.versions_fn(), add mobilitydb
-- Update metrics_trigger_fn, Ignore entry if new time is in the future.
-- Update Monitor offline, check metadata tbl and metrics tbl, reduce flapping
-- Update moorages_view, make arrivals & departures and total_duration computed data
-- Update moorage_view, make arrivals & departures and total_duration computed data, add total visits
-- Update moorages map, use api.moorage_view as source table
-- Update mapgl_fn, refactor function using existing function
-- Update api.export_moorages_gpx_fn, use moorage_view as source to include computed data
-- Add api.export_moorages_kml_fn, export all moorages as kml
-- Update public.process_pre_logbook_fn, update stationary detection, if we have less than 20 metrics or less than 0.5 NM or less than an average of 0.5 kts
-- Update api.export_logbook_kml_fn
-- Allow users to update certain columns on specific TABLES on API schema
-- Refresh user_role permissions
2024-12-02 21:46:05 +01:00
xbgmsharp
07a89d1fb8 Add/Enable MobilityDB SQL unit tests 2024-12-02 21:41:47 +01:00
xbgmsharp
2a4b5dbb43 Update tests versions, PostgreSQL 17.2, timescaledb 2.17.2, mobilitydb 1.2.0 2024-12-02 21:40:49 +01:00
xbgmsharp
306b942b42 Update logbook tests, add support for MobilityDB 2024-12-02 21:39:51 +01:00
xbgmsharp
59f812c1e1 Add unit tests for MobilityDB support 2024-12-02 21:39:13 +01:00
xbgmsharp
798be66c07 Update grafana sql query tests, add support for mobilitydb 2024-12-02 21:38:40 +01:00
xbgmsharp
ac21b0219c Update cron_post_jobs tests, add support for mobilitydb, update geojson key 2024-12-02 21:37:35 +01:00
xbgmsharp
05aa73890a Update API tests, enforce all metrics are from the past 2024-12-02 21:36:25 +01:00
xbgmsharp
48d19f656a Update openapi and mermaid postgsail schema 2024-12-02 21:33:37 +01:00
koenraad
3bbb57e29e Update Self‐hosted-installation-guide on AWS.md 2024-12-01 21:53:18 +01:00
koenraad
bfc0b3756b Update Self‐hosted-installation-guide on AWS.md 2024-11-24 23:19:38 +01:00
koenraad
4936e37f8c Update Self‐hosted-installation-guide on AWS.md 2024-11-24 20:39:20 +01:00
koenraad
9071643aa3 update guide 2024-11-24 17:00:29 +01:00
koenraad
590927481e change AWS guide 2024-11-24 16:43:08 +01:00
koenraad
788d811b15 Create Self‐hosted-installation-guide on AWS.md 2024-11-24 15:49:40 +01:00
xbgmsharp
f72d6b9859 Update migration 202410
- Cleanup public.update_logbook_with_geojson_trigger_fn
2024-11-03 21:46:01 +01:00
xbgmsharp
3e30709675 Release 0.7.8-202410 2024-11-03 21:39:07 +01:00
xbgmsharp
60e0097540 Add migration 202410
- Update moorages map, export more properties (notes,reference_count) from moorages tbl
- Update mapgl_fn, update moorages map sub query to export more properties (notes,reference_count) from moorages tbl
- Update logbook_update_geojson_fn, fix corrupt linestring properties
- Add trigger to update logbook stats from user edit geojson
- Add trigger on logbook update to update metrics from track_geojson
2024-11-03 21:36:20 +01:00
xbgmsharp
5d21cb2e44 Update tests, add invalid gps entry for post processing user editing. 2024-11-03 20:37:11 +01:00
xbgmsharp
ea89c934ee Update openapi documentation 2024-11-03 20:27:56 +01:00
xbgmsharp
20e1b6ad73 Update tests versions, timescaledb 2.17.1 2024-10-31 18:16:11 +01:00
xbgmsharp
79e195c24b Update tests versions, PostGIS 3.5.0 2024-10-22 17:07:05 +02:00
xbgmsharp
156d64d936 Update front-end to v0.1.0-beta12 2024-10-14 10:17:17 +02:00
xbgmsharp
c3dccf94de Release 0.7.7-202409 2024-10-13 10:35:38 +02:00
xbgmsharp
3038821353 Release 0.7.7-202409 2024-10-13 10:31:51 +02:00
xbgmsharp
051408a307 Update migration 202409 2024-10-13 10:04:55 +02:00
xbgmsharp
2f7439d704 Add migration 202409
- Add new email template account_inactivity
- Update HTML email for new logbook
- Update stats_logs_fn, update debug
- Fix stays and moorage statistics for user by date
- Update api.stats_moorages_view, fix time_spent_at_home_port
- Add stats_fn, user statistics by date
- Add mapgl_fn, generate a geojson with all logs as linestring
- Add cron_inactivity_fn, cleanup all data for inactive users and vessels
- Add cron_deactivated_fn, delete all data for inactive users and vessels
- Remove unused and duplicate function, cron_process_no_activity_fn
2024-10-13 09:42:18 +02:00
xbgmsharp
c792bf81d9 Update tests versions, Timescaledb 2.17.0 2024-10-13 09:41:19 +02:00
xbgmsharp
5551376ce2 Update tests versions, PostgreSQL 16.4 and PostGIS 3.4.3 2024-09-29 10:44:24 +02:00
xbgmsharp
069ac31ca0 Update openapi documentation 2024-09-28 11:38:55 +02:00
xbgmsharp
d10b0cf501 Update tests with new overpass api output 2024-09-22 12:43:13 +02:00
xbgmsharp
5dda28db51 Update github workflows, docker compose upgrade 2024-09-04 13:37:20 +02:00
xbgmsharp
0a80f2e35e Release 0.7.6-202408 2024-09-04 12:58:48 +02:00
xbgmsharp
dc79ca2f28 Update unit tests for api_anonymous, return empty with http/200 instead of http/404 2024-09-04 12:40:15 +02:00
xbgmsharp
fe950b2d2a Release 0.7.6-202408 2024-09-04 12:34:23 +02:00
xbgmsharp
029e0b3fb6 Update unit tests to match public.metrics_trigger_fn 2024-09-04 12:33:46 +02:00
xbgmsharp
62854a95e0 Update frontend to v0.1.0-beta11 2024-09-04 12:32:04 +02:00
xbgmsharp
e301e6fedd - Update public.process_post_logbook_fn, Add logbook_stats 2024-09-04 12:31:25 +02:00
xbgmsharp
57cf87fbe9 Add migration 202408
- Update public.process_lat_lon_fn, Ensure name without double quotes, fix issue when the string contains a quote
- Update public.qgis_bbox_trip_py_fn, Add error handling for invalid argument
- Update public.qgis_bbox_py_fn, Add error handling for invalid argument
- Update public.get_app_settings_fn, Add new qgis and video URLs settings
- Update public.process_post_logbook_fn, Remove qgis dependency, Add logbook_stats
- Update public.cron_alerts_fn, Update alert message. Add the date and format the numeric value
- Update public.metrics_trigger_fn, CleanUp DEBUG print, ensure only one "new_stay" insert.
- Update public.check_jwt, reset config value, set default settings to avoid SQL shared session cache for every new HTTP request
- Update public.send_email_py_fn, Update HTML email for new logbook email
- Update deactivated email template
- Update new logbook email template
- Update role authenticator, set api role SQL connection to 30
- Update table auth.vessels, Remove CONSTRAINT unique email for vessels table, Create new index for vessels table
2024-09-04 12:17:25 +02:00
xbgmsharp
3a43e57b3c Update versions. PostgreSQL 16.4 and timescaledb 2.16.1 2024-08-11 14:44:47 +02:00
xbgmsharp
95d283b2ac Update the mermaid postgsail schema 2024-08-08 10:12:34 +02:00
xbgmsharp
18aba507e9 Update documentation, docker compose 2024-08-08 10:12:19 +02:00
xbgmsharp
6045ff46c0 Update comments 2024-08-08 10:11:41 +02:00
xbgmsharp
6e367a0e4c Update github workflows, docker compose upgrade 2024-08-07 23:12:12 +02:00
xbgmsharp
eec149d411 Update db-tests unit tests 2024-08-07 23:09:46 +02:00
xbgmsharp
de2f9c94e8 Update openapi documentation 2024-08-07 23:00:19 +02:00
xbgmsharp
d65a0b0a54 Update versions. timescaledb 2.16.0, PostgREST 12.2.3 2024-08-07 22:15:51 +02:00
xbgmsharp
59c5142909 Add more anonymous unit tests 2024-08-07 22:14:46 +02:00
xbgmsharp
e2fe23e58d Update documentation 2024-08-07 22:14:13 +02:00
xbgmsharp
eedf5881d9 Update tests, update version 2024-07-31 12:39:29 +02:00
xbgmsharp
3327c5a813 Update docker PGREST, add new DB limit 2024-07-31 12:38:52 +02:00
xbgmsharp
e857440133 Release 0.7.5-202407 2024-07-31 11:16:38 +02:00
xbgmsharp
0a13e0a798 Add migration 202407
- Add video error notification message
- Add CRON for new video notification
- Update public.cron_alerts_fn, Fix error when stateOfCharge is null; treat a null stateOfCharge as fully charged (1), as sketched below.
- Update api.export_logbooks_gpx_fn, Fix error: None of these media types are available: text/xml
- Add export logbooks as png from webserver
- Update grafana provisioning, ERROR:  KeyError: 'secureJsonFields'
- Add missing comment on function cron_process_no_activity_fn
- Update grafana role SQL connection to 30
- Create qgis schema for qgis projects
- Update video cronjob
2024-07-30 15:53:26 +02:00
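The stateOfCharge fix above is a null-coalescing default, so a missing battery reading is treated as fully charged instead of tripping the alert on NULL; a one-line sketch, with the jsonb path an assumption of how the metric is stored:

    -- Default a missing stateOfCharge to 1 (fully charged); the key path
    -- into the metrics jsonb is illustrative.
    SELECT COALESCE((m.metrics->>'electrical.batteries.House.capacity.stateOfCharge')::numeric, 1)
        FROM api.metrics m;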
xbgmsharp
8fe0513c3c Add new env for qgis and maplapse 2024-07-30 15:52:59 +02:00
xbgmsharp
b69a52eacd Add new tests for qgis and maplapse 2024-07-30 15:51:44 +02:00
xbgmsharp
129ee7dcbd Add maplapse unit tests 2024-07-30 15:51:26 +02:00
xbgmsharp
9800d83463 Add qgis sql unit tests 2024-07-30 15:51:03 +02:00
xbgmsharp
b80799aa2f Update fronend to v0.1.0-beta10 2024-07-30 15:49:19 +02:00
xbgmsharp
84f73f2281 Update documentation 2024-07-30 15:24:33 +02:00
xbgmsharp
b3d5b80731 Update fronend to v0.1.0-beta9 2024-07-19 09:58:56 +02:00
xbgmsharp
5cefe47fea Release 0.7.4-202406 2024-07-19 09:52:18 +02:00
xbgmsharp
98f45ace66 Update ERD documentation 2024-07-17 09:23:40 +02:00
xbgmsharp
ff69d6ee7a Update documentation 2024-07-16 21:40:55 +02:00
xbgmsharp
52dcdb61fd Update API documentation 2024-07-16 17:41:52 +02:00
xbgmsharp
6bbe9795fb Update the 202406 migration.
- Update grafana role SQL connection to 30
- Alter moorages table with default duration of 0. Affects views
- Update moorage_view to default with duration of 0
- Create an every-15-minutes cron job for cron_process_new_video_fn
- Update video_ready email template
- Reduce debug on public.qgis_getmap_py_fn
- Update public.qgis_bbox_py, add more parameters to support more use cases.
- Create role qgis_role with RLS
- Create role maplapse_role with RLS
- Update notification, add video link and log image link for public.send_email_py_fn, public.send_pushover_py_fn, public.send_telegram_py_fn
2024-07-16 17:00:55 +02:00
xbgmsharp
23ea9dfee7 Update tests, moorages_view and moorage_view include default duration at PT0S 2024-07-16 16:43:53 +02:00
xbgmsharp
82a0f19566 Update versions. timescaledb 2.15.3, PostgREST 12.2.2 2024-07-16 16:37:31 +02:00
xbgmsharp
a7019445e3 Bump up PostgREST 2024-06-19 21:35:43 +02:00
xbgmsharp
f97635cae1 Update flowchart to reflect the new cronjob 2024-06-09 20:23:50 +02:00
xbgmsharp
fe93b9ec11 Update README, add note on docker and postgresql 2024-06-09 20:23:18 +02:00
xbgmsharp
b5be201e65 Update versions. timescaledb 2.15.2 2024-06-09 20:20:56 +02:00
xbgmsharp
33b97898d8 Update API documentation 2024-06-09 20:20:32 +02:00
xbgmsharp
c726f6cc07 Add migration 202406
- Add video timelapse notification message. Initial implementation
- Add public.qgis_getmap_py_fn to generate and request the logbook image url to be cache on QGIS server
- Add public.qgis_bbox_py_fn to generate the logbook extent for the logbook image to access the QGIS server
- Add qgis_role user and role
- Update public.send_email_py_fn, Add support for HTML email for logbook with qgis image
- Update api.vessel_fn, expose vessel_id to generate logbook with qgis image url
- Add process_post_logbook_fn, QGIS is handled in post-processing with notification
- Add cron_process_post_logbook_fn
- Update api.eventlogs_view, exclude post_logbook
- Add new cron job cron_post_logbook
2024-06-09 20:14:31 +02:00
xbgmsharp
48647e5978 Update the 202405 migration.
- Update process_logbook_queue_fn
- Add missing comment on function
- Update cron_process_grafana_fn
- Drop metadata_ip_trigger
2024-06-09 20:10:47 +02:00
xbgmsharp
1702b825c7 Update migration 202405.
Update logbook_timelapse_geojson_fn, ensure we fetch the processed logbook data
Update tbl metadata trigger, Add IP tracking per vessel to avoid abuse
Add public.logbook_active_geojson_fn, return the current trip and last coords
Update api.monitoring_view, add live tracking and add more properties to geojson
Update public.cron_process_grafana_fn, ensure vessel name is not null when not present in signalk
2024-05-23 21:31:46 +02:00
xbgmsharp
3c4f68218f Update the 202405 migration.
- Update public.process_logbook_queue_fn, refactor and cleanup code
- Update public.logbook_update_geojson_fn, Add avg_wind_speed to logbook geojson, Add back truewindspeed and truewinddirection to logbook geojson
- Add public.logbook_timelapse_geojson_fn, Add properties to the geojson for timelapse purpose
2024-05-16 23:44:46 +02:00
xbgmsharp
42e85cc498 Update the 202405 migration. Fix again trip details name in the geojson 2024-05-16 21:54:03 +02:00
xbgmsharp
bd9b207d43 Update the 202405 migration. Add again trip details name in the geojson 2024-05-16 21:27:53 +02:00
xbgmsharp
c588fc676c Update migration 202405
- Update api.login, add user IP to track multi-user accounts
- Update signalk plugin message
- Update cron job signalk plugin
2024-05-16 10:29:47 +02:00
xbgmsharp
e003985d6c Update tests for logbook 2024-05-16 10:02:48 +02:00
xbgmsharp
0cf9fa701d Update fronend to v0.1.0-beta8 2024-05-16 10:02:03 +02:00
xbgmsharp
23d2ced60c Add migration 202405.
- Update logbook_update_avg_fn, Update a logbook with avg wind speed
- Update api.logs_view, Add tags to view
- Update api.moorages_stays_view, Add moorage name to view
- Add a merge_logbook_fn
- Update api.login, add support for disable user
- Add Notifications/Reminders for old signalk plugin
2024-05-14 21:22:36 +02:00
xbgmsharp
a1a5f29c16 Update API documentation 2024-05-14 21:13:53 +02:00
xbgmsharp
5fe9a37eee Add more logbook tests 2024-05-14 21:13:35 +02:00
xbgmsharp
2064947457 Update tests, add more logbook unit tests 2024-05-14 21:12:54 +02:00
xbgmsharp
30b1b8f0a6 Update versions. Debian 12, PostgreSQL 16.3, timescaledb 2.15.0, PostgREST 12.0.3 2024-05-14 21:11:08 +02:00
xbgmsharp
4dcd8d2ea5 Update README 2024-05-04 22:12:53 +02:00
xbgmsharp
670aa2e43a Update openapi documentation 2024-05-01 22:22:18 +02:00
xbgmsharp
1a8790f2a0 Update frontend to 0.1.0-beta7 2024-05-01 22:20:06 +02:00
xbgmsharp
6d97bb1e32 Update the 202404 migration.
Add anonymous check for timelapse2
Add new trip properties in geojson
2024-04-29 18:13:24 +02:00
xbgmsharp
779cee21ec Update docker compose, make storage as a docker volume 2024-04-29 18:05:26 +02:00
xbgmsharp
e48192e609 Update README 2024-04-29 18:02:05 +02:00
xbgmsharp
f2beead3a7 Update README 2024-04-28 21:29:06 +02:00
xbgmsharp
5df3e1dbd3 Update process_logbook_queue_fn, moorage name part2 2024-04-27 18:13:06 +02:00
xbgmsharp
ed555c1f32 Update process_logbook_queue_fn, add moorage name as a note inside the geojson; this allows displaying it during the timelapse replay. 2024-04-27 17:28:52 +02:00
xbgmsharp
02dd68f2d8 Update telemetry 2024-04-27 17:27:58 +02:00
xbgmsharp
d9329705ba Update frontend to 0.1.0-beta6 2024-04-25 21:20:45 +02:00
xbgmsharp
54af136682 Update api.timelapse2_fn, generate a geojson with only geometry Points to include all the properties 2024-04-25 17:22:34 +02:00
xbgmsharp
6f96a070b8 Update README 2024-04-23 22:20:55 +02:00
xbgmsharp
e79086c4a2 Update README 2024-04-23 22:18:17 +02:00
xbgmsharp
ed417a4c5d Update README 2024-04-23 22:03:20 +02:00
xbgmsharp
546274ce29 Add Funding 2024-04-23 21:41:00 +02:00
xbgmsharp
d5b6072273 Update README 2024-04-23 20:26:03 +02:00
xbgmsharp
e8addd2e9c Add CHANGELOG CODE_OF_CONDUCT CONTRIBUTING 2024-04-23 20:25:45 +02:00
xbgmsharp
f843a6a1f3 Update README 2024-04-23 18:52:39 +02:00
xbgmsharp
478bbf5529 Update README 2024-04-23 16:58:36 +02:00
xbgmsharp
c9523e2f6f Update README 2024-04-23 16:55:25 +02:00
xbgmsharp
5fa85821de Update README 2024-04-23 16:37:28 +02:00
xbgmsharp
7ccef80904 Update README 2024-04-23 16:35:12 +02:00
xbgmsharp
04cc7de245 Update README 2024-04-23 16:27:28 +02:00
xbgmsharp
a75ba105df Update telemetry 2024-04-19 12:37:08 +02:00
xbgmsharp
7eddeefa47 Update frontend 2024-04-19 12:28:41 +02:00
xbgmsharp
314cdc71c7 Update tests, bump version 0.7.2 2024-04-19 12:27:54 +02:00
xbgmsharp
d508ac1662 Update migration 202404.
- Update cron_windy_fn, improve parameters check
- Update email_templates, improve windy error message
- Update delete_logbook_fn, avoid variable not found in subplan target list
- Update tables permissions on api.moorages and api.logbook
- Draft api.timelapse2_fn, new timelapse per point vs linestring
2024-04-19 11:42:27 +02:00
xbgmsharp
6f9956ee46 Add telemetry 2024-04-19 11:41:18 +02:00
xbgmsharp
0b854374ff Update README 2024-04-09 09:36:38 +02:00
xbgmsharp
9b647c9a49 Update README 2024-04-09 09:34:28 +02:00
xbgmsharp
800f0c83e3 Update frontend to 0.1.0-beta5 2024-04-08 19:59:57 +02:00
xbgmsharp
7424fbbe49 Update README 2024-04-07 21:32:45 +02:00
xbgmsharp
e183530435 Update .env.example clarification comments 2024-04-07 21:31:40 +02:00
xbgmsharp
4444f73919 Merge remote-tracking branch 'origin/main' 2024-04-07 15:44:26 +02:00
xbgmsharp
241c70fcb5 Update doc README 2024-04-07 15:41:11 +02:00
xbgmsharp
327324d7c3 Merge pull request #6 from xbgmsharp/dependabot/github_actions/actions/checkout-4
Bump actions/checkout from 3 to 4
2024-04-07 10:20:42 +02:00
xbgmsharp
28ccc9e2da Revert test update 2024-04-07 10:20:11 +02:00
xbgmsharp
966846c93d Update test, fix: actually check the mermaid exit code 2024-04-07 10:09:15 +02:00
dependabot[bot]
d966d34a29 Bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-07 07:20:12 +00:00
xbgmsharp
8bd7594357 Add dependabot 2024-04-07 09:19:41 +02:00
xbgmsharp
4de47fd011 Update github-actions versions 2024-04-07 09:19:05 +02:00
xbgmsharp
cf0f178d13 Update README 2024-03-25 22:00:31 +01:00
xbgmsharp
b224374e65 Update frontend to 0.1.0-beta4 2024-03-25 21:57:09 +01:00
xbgmsharp
1c441af48c Update delete_account_fn and delete_vessel_fn 2024-03-25 21:56:16 +01:00
xbgmsharp
e5287d74f6 Fix cron_process_no_activity_fn 2024-03-11 09:28:40 +01:00
xbgmsharp
9cf334125a update public.cron_process_no_activity_fn, delete old data 2024-03-10 21:50:05 +01:00
xbgmsharp
1342dba298 Update public.cron_windy_fn, fix first notification handling
Update delete_vessel_fn, return a jsonb with details
2024-03-09 22:24:09 +01:00
xbgmsharp
7e734c412b Bump up frontend v0.1.0-beta2 2024-03-03 23:09:44 +01:00
xbgmsharp
9ae10b0519 Fix migration, metadata_upsert_trigger_fn 2024-03-03 23:09:18 +01:00
xbgmsharp
ab1b4c7076 Update front-end tests, build web and web_dev 2024-03-03 23:08:46 +01:00
xbgmsharp
edb7bc7dd8 Update test, Bump up version 2024-03-03 12:55:24 +01:00
xbgmsharp
e124bdea44 Update README 2024-03-03 12:41:23 +01:00
xbgmsharp
b944c39507 Update tests, log runTime is now stored in ISO duration format. 2024-03-03 12:40:36 +01:00
xbgmsharp
b4791f059c Update logbook_update_extra_json_fn: convert log extra JSON fields, runtime to ISO format and log distance to NM
Update metadata_upsert_trigger_fn: strip special chars from platform metadata
Drop deprecated fn
2024-03-03 12:21:38 +01:00
xbgmsharp
78ff78a6b1 Update OpenAPI 2024-03-02 11:24:26 +01:00
xbgmsharp
b6be05753c Update test, Bump up version 2024-03-02 11:22:58 +01:00
xbgmsharp
30e3797854 Update tests, add conversion output 2024-03-02 11:22:36 +01:00
xbgmsharp
e9dea3b124 Add migration 202403, Add new conversion metersToKnots (see the sketch below), Update logbook_update_geojson_fn expose data in nautical units, Update process_lat_lon_fn process moorage to always set a country 2024-03-02 11:19:42 +01:00
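metersToKnots above follows from 1 kt = 1852 m/h, so 1 m/s = 3600/1852 ≈ 1.9438 kts; a minimal sketch of such a helper:

    CREATE OR REPLACE FUNCTION public.meterstoknots(speed NUMERIC)
    RETURNS NUMERIC AS $$
        -- m/s * 3600/1852 = knots (1 knot = 1852 metres per hour)
        SELECT ROUND(speed * 3600.0 / 1852.0, 2);
    $$ LANGUAGE sql IMMUTABLE;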
xbgmsharp
1a08acda85 Update moorage process to always set country column 2024-03-02 11:18:40 +01:00
xbgmsharp
633e73b29d Update tests, add conversion unit tests radiantToDegrees, valToPercent, metersToKnots 2024-03-02 11:17:40 +01:00
xbgmsharp
a383d0db53 Bump up timescaledb 2024-02-29 22:30:12 +01:00
xbgmsharp
0116ea4feb Bump up timescaledb 2024-02-29 22:24:57 +01:00
xbgmsharp
92bcf0ffa6 Update public.logbook_update_geojson_fn, add TWD and TWS and navigation.state into log GeoJSON 2024-02-29 22:17:37 +01:00
xbgmsharp
414736909b Update README 2024-02-23 23:49:17 +01:00
xbgmsharp
36173658a0 Update README 2024-02-23 21:30:34 +01:00
xbgmsharp
984a7c14da Add Grafana URL 2024-02-23 21:08:22 +01:00
xbgmsharp
48be759e4d Update README 2024-02-23 21:07:29 +01:00
xbgmsharp
e473baa5a0 Update docker, Add build arg 2024-02-22 22:53:50 +01:00
xbgmsharp
9ba79123ec Update README 2024-02-21 16:36:29 +01:00
xbgmsharp
82e7056b0c Update web container, build locally instead of cloud version 2024-02-21 16:28:12 +01:00
xbgmsharp
4829d0b848 Update test, trim users vessel name 2024-02-21 16:25:00 +01:00
xbgmsharp
9069b11f71 Update migrations files. Add license header
Update windy PWS, ensure the required paths are present
Update send_notification_fn, fix pushover notification if missing pushover key
Update vessel name, trim name
2024-02-21 15:24:20 +01:00
xbgmsharp
bf3eb3c806 Bump up frontend v0.1.0-beta 2024-02-19 21:44:18 +01:00
xbgmsharp
6e863ca355 Update migration 202402, add cron_prune_otp_fn 2024-02-19 09:32:18 +01:00
xbgmsharp
51128112f3 Update api_version 2024-02-18 18:51:46 +01:00
xbgmsharp
b150d9706f tag release v0.7.0 2024-02-18 18:27:33 +01:00
xbgmsharp
813460da7b Release v0.7.0 2024-02-18 18:04:18 +01:00
xbgmsharp
813b8088f3 Add migrations 2024-02-18 18:00:09 +01:00
xbgmsharp
f90911c523 Update pgcron, rename cron jobs not depending on the process_queue table 2024-02-18 17:54:05 +01:00
xbgmsharp
790bbb671c Update pgcron default username 2024-02-18 17:51:27 +01:00
xbgmsharp
5455d8246f Update versions, timescaledb, postgis, postgres 2024-02-18 12:59:21 +01:00
xbgmsharp
95d24c538d Change cron windy and alerts to use True wind speed 2024-02-08 10:28:19 +01:00
xbgmsharp
57799c9ee4 Update cron_process_alerts_fn, update alert message 2024-02-07 23:56:11 +01:00
xbgmsharp
437bfd0252 Update cron_process_alerts_fn and cron_process_windy_fn.
When enabled for the first time, start sending data from the last 24h.
cron_process_windy_fn, fix main query and percentage threshold after testing and debugging
2024-02-07 17:03:52 +01:00
xbgmsharp
dcceab2551 Update alert message 2024-02-06 22:03:36 +01:00
xbgmsharp
cdc2e4e55c Update alert message 2024-02-06 22:02:44 +01:00
xbgmsharp
76bbe29567 Add alert parsing in messages notifications 2024-02-06 21:30:04 +01:00
xbgmsharp
36b8eece52 Update cron process alerts, Add and merge default threshold with user settings 2024-02-06 21:15:52 +01:00
xbgmsharp
a5436479cf Add cron process alerts, send a notification when user thresholds are reached within the user interval time frame 2024-02-06 19:41:46 +01:00
xbgmsharp
23ea3bd0d8 Update alert message 2024-02-06 18:11:49 +01:00
xbgmsharp
826566e097 Add conversion helpers, kelvinToCel, radiantToDegrees, valToPercent 2024-02-06 18:11:10 +01:00
xbgmsharp
f942076cc2 Update cron_process_windy_fn, fix main loop query 2024-02-06 13:51:32 +01:00
xbgmsharp
8dba0c21b6 Add alert notification template 2024-02-05 12:54:25 +01:00
xbgmsharp
a96160ef15 Add windy notifications templates 2024-02-05 12:53:42 +01:00
xbgmsharp
ccf91bb832 Add alerts cron job 2024-02-05 12:53:09 +01:00
xbgmsharp
b9993ed28f Add cron job for windy 2024-02-05 12:52:23 +01:00
xbgmsharp
0e5e619625 Add cron process windy, send observations to windy station 2024-02-05 12:51:43 +01:00
xbgmsharp
294a60d13a Update app settings, Expose windy api key 2024-02-05 12:51:07 +01:00
xbgmsharp
b6587b1287 Add windy_pws_py, send data to windy station
Reduce debug from keycloak and grafana integration
2024-02-05 12:50:10 +01:00
xbgmsharp
74512d0bf3 Update README 2024-02-01 22:43:47 +01:00
xbgmsharp
322a479b4f Update grafana provisioning, if signalk vessel name is null then use vessel_id as the organisation name 2024-01-31 10:23:21 +01:00
xbgmsharp
c5cba6a59f Bump up frontend v0.0.9 2024-01-30 22:57:36 +01:00
xbgmsharp
22466430ac Release v0.6.1 2024-01-30 22:48:46 +01:00
xbgmsharp
643c16ad3f Update keycloak account status 2024-01-30 22:48:22 +01:00
xbgmsharp
59a3c41b4a Update grafana cron, Add keycloak provisioning to sync vessel_id as user attributes 2024-01-30 22:33:08 +01:00
xbgmsharp
371eb6c720 Update message template. Update grafana notification 2024-01-30 22:32:28 +01:00
xbgmsharp
c2ffe9777c Update get_app_settings_fn, add keycloak URI 2024-01-30 22:31:53 +01:00
xbgmsharp
c0261791f5 Add keycloak python user integration 2024-01-30 21:55:30 +01:00
xbgmsharp
e727954f83 Update api.vessel_details_fn, fix length 2024-01-29 21:45:26 +01:00
xbgmsharp
22b04334f8 Update frontend, bump 0.0.9-beta3 2024-01-26 16:33:42 +01:00
xbgmsharp
a5ec4c0039 Update monitoring_view, remove status 2024-01-26 13:05:44 +01:00
xbgmsharp
6a63f7d02f Update cron comment, Add deprecated comment 2024-01-26 11:32:50 +01:00
xbgmsharp
2fec2e650c Update frontend to latest beta 2024-01-26 11:32:05 +01:00
xbgmsharp
ed8514bfb1 Update Grafana default dashboard. Update uuid for provisioning.
Fix metric order to display the latest known vessel position vs the first one
2024-01-26 11:31:08 +01:00
xbgmsharp
f25e735674 Update overpass_py to return jsonb instead of a string when there is no value 2024-01-22 21:55:38 +01:00
xbgmsharp
c1b71cabd8 Fix test versions 2024-01-19 11:41:58 +01:00
xbgmsharp
495e25b838 Release 0.6.0 2024-01-19 11:35:29 +01:00
xbgmsharp
a547271496 Update cron_process_grafana_fn, fix SQL error 2024-01-19 09:23:03 +01:00
xbgmsharp
3a1d0baef8 Update grafana cron job, add notification 2024-01-19 00:13:17 +01:00
xbgmsharp
628de57b5f Update template table, add grafana notification 2024-01-19 00:12:47 +01:00
xbgmsharp
9a5f27d21e Update api.timelapse_fn, fix typo using date. add api.timelapse2_fn to export geojson with notes 2024-01-18 23:54:47 +01:00
xbgmsharp
7892b615e0 Update frontend to latest beta 2024-01-18 22:29:41 +01:00
xbgmsharp
2c2f5d8605 Expose vessel status via API 2024-01-18 22:11:28 +01:00
xbgmsharp
47eda3dcaf Disable stationary detection in pre logbook, still some issues 2024-01-18 22:11:01 +01:00
xbgmsharp
0c0279767f Add comment for process_pre_logbook_fn 2024-01-17 23:03:57 +01:00
xbgmsharp
98e28aacea Update tests, add RLS tests for the anonymous role. Upgrade timescaledb version. 2024-01-17 20:27:23 +01:00
xbgmsharp
b04d336c0d Update permissions, add ROW LEVEL SECURITY for anonymous access.
Allow anonymous role for public access
Allow vessel role to new function for oauth
2024-01-17 20:25:51 +01:00
xbgmsharp
288c458c5a Update anonymous tests 2024-01-17 20:24:58 +01:00
xbgmsharp
d3dd46c834 Rename process_logbook_valid_fn to process_pre_logbook_fn 2024-01-17 20:20:49 +01:00
xbgmsharp
d0bc468ce7 Allow anonymous access for complete timelapse 2024-01-17 20:18:40 +01:00
xbgmsharp
3f51e89303 Update formatting 2024-01-16 14:12:13 +01:00
xbgmsharp
000c5651e2 Add PostgREST Media Type Handlers support 2024-01-15 21:44:44 +01:00
xbgmsharp
4bec738826 Add new procedure, api.monitoring_history_fn 2024-01-13 23:23:53 +01:00
xbgmsharp
012812c898 Update eventlogs_view tests, as we skip pre-logbook check from user 2024-01-13 11:45:10 +01:00
xbgmsharp
1c04822cf8 Add cron_process_grafana_fn comment 2024-01-13 11:44:32 +01:00
xbgmsharp
50f018100b Update api.eventlogs_view, ignore pre_logbook check from user 2024-01-12 23:31:00 +01:00
xbgmsharp
13c461a038 Expose metadata platform to frontend 2024-01-12 23:28:16 +01:00
xbgmsharp
8763448523 Fix SQL versions, update versions 2024-01-12 22:53:21 +01:00
xbgmsharp
e02aaf3676 Update grafana dashboards path 2024-01-12 22:50:57 +01:00
xbgmsharp
ae61072ba4 Update default grafana config and refactor provisioning files 2024-01-12 22:47:28 +01:00
xbgmsharp
242a5554ea 0.6.0-beta 2024-01-12 22:34:00 +01:00
xbgmsharp
39888e1957 Update PostgSail and PostgREST version 2024-01-12 22:33:38 +01:00
xbgmsharp
666f69c42a Update API and SQL documentation 2024-01-12 22:31:19 +01:00
xbgmsharp
77f41251c5 Fix error from 480417917d 2024-01-12 22:22:49 +01:00
xbgmsharp
40a1e0fa39 Fix python error from 0a09d7bbfc 2024-01-12 22:19:40 +01:00
xbgmsharp
5cf2d10757 Fix syntax error from 489fb9562b 2024-01-12 22:13:54 +01:00
xbgmsharp
0dd6410589 Fix docker compose PGRST_SERVER_TIMING_ENABLED contains true 2024-01-12 22:07:16 +01:00
xbgmsharp
682c68a108 Add new postgsail settings app.keycloak_uri 2024-01-12 21:54:51 +01:00
xbgmsharp
5d1db984b8 Update postgrest container with new parameters
Update/fix grafana container, properly set the admin password
2024-01-12 21:53:01 +01:00
xbgmsharp
0a09d7bbfc Update keycloak_py, make URI, host, user, and pass dynamic from config 2024-01-12 17:40:59 +01:00
xbgmsharp
14cc4f5ed2 Update cron_process_grafana_fn, add formatting and a comment for clarification. 2024-01-12 17:39:54 +01:00
xbgmsharp
ff23f5c2ad Update open api documentation 2024-01-10 22:47:01 +01:00
xbgmsharp
489fb9562b Trigger email OTP validation only if the user is not coming from the OAuth server 2024-01-10 22:39:16 +01:00
xbgmsharp
8c32345342 Feat: Allow signalk plugin webapp to register user and vessel to database via oauth 2024-01-10 22:38:04 +01:00
xbgmsharp
480417917d Feat: initial implementation for oauth support via Keycloak server or other 2024-01-10 22:37:11 +01:00
xbgmsharp
e557ed49a5 Add keycloak_py_fn. This function links the postgsail user with the OAuth keycloak server 2024-01-09 22:04:22 +01:00
xbgmsharp
a3475dfe99 Update UUID v7 function 2024-01-09 22:03:45 +01:00
xbgmsharp
e670e11cd5 Update ERD diagram 2024-01-07 22:18:44 +01:00
xbgmsharp
88de5003c2 feat: Add new metadata fields platform and configuration. This allows pre-defined configuration for well-known devices like Victron VenusOS devices. 2024-01-07 22:16:52 +01:00
xbgmsharp
c46a428fc2 Update tests, add more invalid metrics to be ignored, add sql metrics count check 2023-12-31 21:13:40 +01:00
xbgmsharp
db3bd6b06f Update metrics_trigger_fn, silently ignore invalid status. 2023-12-31 21:09:42 +01:00
xbgmsharp
4ea7a1b019 Update API documentation 2023-12-29 18:48:11 +01:00
xbgmsharp
ce106074dc Feat: Update tests for pre logbook check and grafana cron job support 2023-12-29 18:26:22 +01:00
xbgmsharp
e7d8229e83 Feat: Add grafana cron job 2023-12-29 18:23:07 +01:00
xbgmsharp
f14342bb07 Fix grafana cron job. update cron_process_grafana_fn 2023-12-29 18:16:59 +01:00
xbgmsharp
c4fbf7682d refactor documentation 2023-12-29 18:15:08 +01:00
xbgmsharp
f8c1f43f48 Fix typo 2023-12-29 12:16:43 +01:00
xbgmsharp
0d5089af2d Add new settings for Grafana HTTP API URL 2023-12-29 11:43:49 +01:00
xbgmsharp
da1952ed31 Feat: Add pre_logbook validation 2023-12-29 11:35:22 +01:00
xbgmsharp
a5d5585366 feat: Add pre logbook check. Split logbook process function 2023-12-29 11:34:19 +01:00
xbgmsharp
5f9a889a44 Feat: Add grafana provisioning on first contact from vessel
Feat: Add pre logbook validation
2023-12-29 11:32:57 +01:00
xbgmsharp
f9719bd174 Feat: Add cron for pre logbook check
Feat: Add cron for Grafana provisioning
2023-12-29 11:28:57 +01:00
xbgmsharp
8d1b8cb389 fix: Update template email message 2023-12-29 11:27:00 +01:00
xbgmsharp
acfd058d3b feat: Add uuid v7 helpers 2023-12-29 11:26:16 +01:00
xbgmsharp
eeae7c40c6 Feat: Add proper Grafana provisioning via HTTP API 2023-12-29 11:24:54 +01:00
xbgmsharp
2bbf27f3ad Update frontend 2023-12-29 11:24:18 +01:00
xbgmsharp
2ba81a935f Update reindex cron job to use the CONCURRENTLY option, remove metrics reindex as it is managed by timescale. 2023-12-03 17:02:05 +01:00
xbgmsharp
0fbac67895 Remove Duplicate Indexes 2023-12-01 23:51:13 +01:00
xbgmsharp
228b234582 Disable remove duplicate index 2023-12-01 22:25:12 +01:00
xbgmsharp
75c8a9506a Update tests, fix check on stay name, add anonymous/public access tests, update badges format 2023-12-01 22:06:01 +01:00
xbgmsharp
2b48a66cd2 Update overpass_py_fn, Enforce area with a proper name tag 2023-12-01 22:01:11 +01:00
xbgmsharp
e642049e93 Update versions.
PostgreSQL-16.1
Postgis-3.4.1
Timescaledb-2.13
PostgSail-0.5.2
2023-12-01 21:59:11 +01:00
xbgmsharp
94e123c95e Update badges processing, update time properties to be at the logbook date 2023-12-01 13:15:12 +01:00
xbgmsharp
9787328990 Update badges, set badges time to log time versus processed time
Add deprecated comment to unused functions.
2023-11-30 22:16:49 +01:00
xbgmsharp
de62d936d5 Update api.monitoring_view, add new properties wind and speed into the geojson. 2023-11-30 21:50:21 +01:00
xbgmsharp
293a33da08 Update overpass_py_fn, improve geo location detection, first check against area, then search around within 400m 2023-11-30 21:47:56 +01:00
xbgmsharp
2b105db5c7 Update public.logbook_update_geojson_fn, fix geojson properties for the wind and add notes geojson properties for each Point.
Update process_logbook_queue_fn, improve invalid logbook detection (mostly due to multiple GPS sources), remove logbooks where more than 60% of metrics are within 100m.
2023-11-30 16:10:50 +01:00
xbgmsharp
af003d5a62 Update api.export_moorages_gpx_fn, fix moorage id and url path.
Update api.delete_logbook_fn, set api.metrics to moored and subtract 1 from the moorage count
2023-11-30 15:53:50 +01:00
xbgmsharp
ecc9fd6d9f Update comment order 2023-11-30 12:50:46 +01:00
xbgmsharp
df5f667b41 Update README 2023-11-29 22:14:47 +01:00
xbgmsharp
1bfa04a057 Update pgcron jobs.
Fix clean_up process logs.
Add vacuum for auth schema.
Add reindex for api schema.
Run notification every minute instead of every 2
2023-11-29 22:00:19 +01:00
108 changed files with 22552 additions and 2194 deletions

.env.example

@@ -6,17 +6,23 @@ POSTGRES_DB=postgres
PGSAIL_AUTHENTICATOR_PASSWORD=password
PGSAIL_GRAFANA_PASSWORD=password
PGSAIL_GRAFANA_AUTH_PASSWORD=password
# SMTP server settings
PGSAIL_EMAIL_FROM=root@localhost
PGSAIL_EMAIL_SERVER=localhost
#PGSAIL_EMAIL_USER= Comment if not use
#PGSAIL_EMAIL_PASS= Comment if not use
# Pushover settings
#PGSAIL_PUSHOVER_APP_TOKEN= Comment if not use
#PGSAIL_PUSHOVER_APP_URL= Comment if not use
# TELEGRAM BOT, ask BotFather
#PGSAIL_TELEGRAM_BOT_TOKEN= Comment if not use
# webapp entrypoint, typically the public DNS or IP
PGSAIL_APP_URL=http://localhost:8080
# API entrypoint from the webapp, typically the public DNS or IP
PGSAIL_API_URL=http://localhost:3000
# POSTGREST ENV Settings
PGRST_DB_URI=postgres://authenticator:${PGSAIL_AUTHENTICATOR_PASSWORD}@db:5432/signalk
# % cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 42 | head -n 1
PGRST_JWT_SECRET=_at_least_32__char__long__random
# Grafana ENV Settings
GF_SECURITY_ADMIN_PASSWORD=password

.github/FUNDING.yml (new file)

@@ -0,0 +1,14 @@
# These are supported funding model platforms
github: [xbgmsharp]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/dependabot.yml (new file)

@@ -0,0 +1,14 @@
version: 2
updates:
# Enable version updates for Docker
- package-ecosystem: "docker"
# Look for a `Dockerfile` in the `root` directory
directory: "/"
# Check for updates once a week
schedule:
interval: "weekly"
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly

(GitHub Actions workflow: database & schemalint)

@@ -21,13 +21,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out the source
- uses: actions/checkout@v3
+ uses: actions/checkout@v4
- name: Set env
run: cp .env.example .env
- name: Pull Docker images
- run: docker-compose pull db api
+ run: docker compose pull db api
- name: Run PostgSail Database & schemalint
# Environment variables
@@ -41,10 +41,10 @@ jobs:
run: |
set -eu
source .env
- docker-compose stop || true
- docker-compose rm || true
- docker-compose up -d db && sleep 30 && docker-compose up -d api && sleep 5
- docker-compose ps -a
+ docker compose stop || true
+ docker compose rm || true
+ docker compose up -d db && sleep 30 && docker compose up -d api && sleep 5
+ docker compose ps -a
echo ${PGSAIL_API_URL}
curl ${PGSAIL_API_URL}
npm i -D schemalint
@@ -52,4 +52,4 @@ jobs:
- name: Show the logs
if: always()
run: |
- docker-compose logs
+ docker compose logs

View File

@@ -23,16 +23,16 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out the source
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set env
run: cp .env.example .env
- name: Pull Docker images
run: docker-compose pull db api
run: docker compose pull db api
- name: Build Docker images
run: docker-compose -f docker-compose.dev.yml -f docker-compose.yml build tests
run: docker compose -f docker-compose.dev.yml -f docker-compose.yml build tests
- name: Install psql
run: sudo apt install postgresql-client
@@ -49,10 +49,10 @@ jobs:
run: |
set -eu
source .env
docker-compose stop || true
docker-compose rm || true
docker-compose up -d db && sleep 30 && docker-compose up -d api && sleep 5
docker-compose ps -a
docker compose stop || true
docker compose rm || true
docker compose up -d db && sleep 30 && docker compose up -d api && sleep 5
docker compose ps -a
echo ${PGSAIL_API_URL}
curl ${PGSAIL_API_URL}
psql -c "select 1"
@@ -70,4 +70,4 @@ jobs:
- name: Show the logs
if: always()
run: |
docker-compose logs
docker compose logs

View File

@@ -20,7 +20,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
submodules: 'true'
@@ -31,7 +31,11 @@ jobs:
run: docker compose -f docker-compose.dev.yml -f docker-compose.yml pull db api web_tests
- name: Build Docker images
run: docker compose -f docker-compose.dev.yml -f docker-compose.yml build web_dev
run: |
set -eu
source .env
docker compose -f docker-compose.dev.yml -f docker-compose.yml build web_dev
docker compose -f docker-compose.dev.yml -f docker-compose.yml build web
- name: Run PostgSail Web tests
# Environment variables
@@ -45,10 +49,10 @@ jobs:
run: |
set -eu
source .env
docker-compose stop || true
docker-compose rm || true
docker-compose up -d db && sleep 30 && docker-compose up -d api && sleep 5
docker-compose ps -a
docker compose stop || true
docker compose rm || true
docker compose up -d db && sleep 30 && docker compose up -d api && sleep 5
docker compose ps -a
echo "Test PostgSail Web Unit Test"
docker compose -f docker-compose.dev.yml -f docker-compose.yml up -d web_dev && sleep 100
docker compose -f docker-compose.dev.yml -f docker-compose.yml logs web_dev
@@ -63,4 +67,4 @@ jobs:
- name: Show the logs
if: always()
run: |
docker-compose logs
docker compose logs

View File

@@ -20,13 +20,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set env
run: cp .env.example .env
- name: Pull Docker images
run: docker-compose pull db app
run: docker compose pull db app
- name: Run PostgSail Grafana test
# Environment variables
@@ -40,15 +40,16 @@ jobs:
run: |
set -eu
source .env
docker-compose stop || true
docker-compose rm || true
docker-compose up -d db && sleep 30
docker-compose ps -a
docker compose stop || true
docker compose rm || true
docker compose up -d db && sleep 30
docker compose ps -a
echo "Test PostgSail Grafana Unit Test"
docker-compose up -d app && sleep 5
docker-compose ps -a
docker compose up -d app && sleep 5
docker compose ps -a
curl http://localhost:3001/
docker compose exec -i db psql -Uusername signalk -c "select public.cron_process_grafana_fn();"
- name: Show the logs
if: always()
run: |
docker-compose logs
docker compose logs

1
.gitignore vendored
View File

@@ -1,5 +1,6 @@
.DS_Store
.env
docker-compose.mm.yml
initdb/*.csv
initdb/*.no
initdb/*.jwk

1
CHANGELOG.md Normal file
View File

@@ -0,0 +1 @@
## Please see [Releases](https://github.com/xbgmsharp/postgsail/releases) for the release notes.

45
CODE_OF_CONDUCT.md Normal file
View File

@@ -0,0 +1,45 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project maintainer using any of the [private contact addresses](https://github.com/dec0dOS/amazing-github-template#support). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 1.4, available at <https://www.contributor-covenant.org/version/1/4/code-of-conduct.html>
For answers to common questions about this code of conduct, see <https://www.contributor-covenant.org/faq>

3
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,3 @@
Styleguides
Ensure your code follows lint formatting.

View File

@@ -1,34 +0,0 @@
# PostgSail ERD
The Entity-Relationship Diagram (ERD) provides a graphical representation of database tables, columns, and inter-relationships. ERD can give sufficient information for the database administrator to follow when developing and maintaining the database.
## A global overview
Auto generated Mermaid diagram using [mermerd](https://github.com/KarnerTh/mermerd) and [MermaidJs](https://github.com/mermaid-js/mermaid).
[PostgSail SQL Schema](https://github.com/xbgmsharp/postgsail/tree/main/ERD/postgsail.md "PostgSail SQL Schema")
## Further
There is 3 main schemas:
- API Schema:
- tables
- metrics
- logbook
- ...
- functions
- ...
- Auth Schema:
- tables
- accounts
- vessels
- ...
- functions
- ...
- Public Schema:
- tables
- app_settings
- tpl_messages
- ...
- functions
- ...

View File

@@ -1,246 +0,0 @@
```mermaid
erDiagram
api_logbook {
text _from
double_precision _from_lat
double_precision _from_lng
integer _from_moorage_id "Link api.moorages with api.logbook via FOREIGN KEY and REFERENCES"
timestamp_with_time_zone _from_time "{NOT_NULL}"
text _to
double_precision _to_lat
double_precision _to_lng
integer _to_moorage_id "Link api.moorages with api.logbook via FOREIGN KEY and REFERENCES"
timestamp_with_time_zone _to_time
boolean active
double_precision avg_speed
numeric distance "in NM"
interval duration "Best to use standard ISO 8601"
jsonb extra "computed signalk metrics of interest, runTime, currentLevel, etc"
integer id "{NOT_NULL}"
double_precision max_speed
double_precision max_wind_speed
text name
text notes
geography track_geog "postgis geography type default SRID 4326 Unit: degres"
jsonb track_geojson "store generated geojson with track metrics data using with LineString and Point features, we can not depend api.metrics table"
geometry track_geom "postgis geometry type EPSG:4326 Unit: degres"
text vessel_id "{NOT_NULL}"
}
api_metadata {
boolean active "trigger monitor online/offline"
boolean active
double_precision beam
text client_id
timestamp_with_time_zone created_at "{NOT_NULL}"
double_precision height
integer id "{NOT_NULL}"
double_precision length
numeric mmsi
text name
text plugin_version "{NOT_NULL}"
numeric ship_type
text signalk_version "{NOT_NULL}"
timestamp_with_time_zone time "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text vessel_id "Link auth.vessels with api.metadata via FOREIGN KEY and REFERENCES {NOT_NULL}"
text vessel_id "{NOT_NULL}"
}
api_metrics {
double_precision anglespeedapparent
text client_id
double_precision courseovergroundtrue
double_precision latitude "With CONSTRAINT but allow NULL value to be ignored silently by trigger"
double_precision longitude "With CONSTRAINT but allow NULL value to be ignored silently by trigger"
jsonb metrics
double_precision speedoverground
status status "<sailing,motoring,moored,anchored>"
timestamp_with_time_zone time "{NOT_NULL}"
text vessel_id "{NOT_NULL}"
double_precision windspeedapparent
}
api_moorages {
text country
geography geog "postgis geography type default SRID 4326 Unit: degres"
boolean home_flag
integer id "{NOT_NULL}"
double_precision latitude
double_precision longitude
text name
jsonb nominatim
text notes
jsonb overpass
integer reference_count
integer stay_code "Link api.stays_at with api.moorages via FOREIGN KEY and REFERENCES"
interval stay_duration "Best to use standard ISO 8601"
text vessel_id "{NOT_NULL}"
}
api_stays {
boolean active
timestamp_with_time_zone arrived "{NOT_NULL}"
timestamp_with_time_zone departed
interval duration "Best to use standard ISO 8601"
geography geog "postgis geography type default SRID 4326 Unit: degres"
integer id "{NOT_NULL}"
double_precision latitude
double_precision longitude
integer moorage_id "Link api.moorages with api.stays via FOREIGN KEY and REFERENCES"
text name
text notes
integer stay_code "Link api.stays_at with api.stays via FOREIGN KEY and REFERENCES"
text vessel_id "{NOT_NULL}"
}
api_stays_at {
text description "{NOT_NULL}"
integer stay_code "{NOT_NULL}"
}
auth_accounts {
timestamp_with_time_zone connected_at "{NOT_NULL}"
timestamp_with_time_zone created_at "{NOT_NULL}"
citext email "{NOT_NULL}"
text first "User first name with CONSTRAINT CHECK {NOT_NULL}"
text last "User last name with CONSTRAINT CHECK {NOT_NULL}"
text pass "{NOT_NULL}"
jsonb preferences
integer public_id "{NOT_NULL}"
name role "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text user_id "{NOT_NULL}"
}
auth_otp {
text otp_pass "{NOT_NULL}"
timestamp_with_time_zone otp_timestamp
smallint otp_tries "{NOT_NULL}"
citext user_email "{NOT_NULL}"
}
auth_vessels {
timestamp_with_time_zone created_at "{NOT_NULL}"
numeric mmsi
text name "{NOT_NULL}"
citext owner_email "{NOT_NULL}"
name role "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text vessel_id "{NOT_NULL}"
}
public_aistypes {
text description
numeric id
}
public_app_settings {
text name "application settings name key {NOT_NULL}"
text value "application settings value {NOT_NULL}"
}
public_badges {
text description
text name
}
public_email_templates {
text email_content
text email_subject
text name
text pushover_message
text pushover_title
}
public_geocoders {
text name
text reverse_url
text url
}
public_iso3166 {
text alpha_2
text alpha_3
text country
integer id
}
public_mid {
text country
integer country_id
numeric id
}
public_ne_10m_geography_marine_polys {
text changed
text featurecla
geometry geom
integer gid "{NOT_NULL}"
text label
double_precision max_label
double_precision min_label
text name
text name_ar
text name_bn
text name_de
text name_el
text name_en
text name_es
text name_fa
text name_fr
text name_he
text name_hi
text name_hu
text name_id
text name_it
text name_ja
text name_ko
text name_nl
text name_pl
text name_pt
text name_ru
text name_sv
text name_tr
text name_uk
text name_ur
text name_vi
text name_zh
text name_zht
text namealt
bigint ne_id
text note
smallint scalerank
text wikidataid
}
public_process_queue {
text channel "{NOT_NULL}"
integer id "{NOT_NULL}"
text payload "{NOT_NULL}"
timestamp_with_time_zone processed
text ref_id "either user_id or vessel_id {NOT_NULL}"
timestamp_with_time_zone stored "{NOT_NULL}"
}
public_spatial_ref_sys {
character_varying auth_name
integer auth_srid
character_varying proj4text
integer srid "{NOT_NULL}"
character_varying srtext
}
api_logbook }o--|| api_metadata : ""
api_logbook }o--|| api_moorages : ""
api_logbook }o--|| api_moorages : ""
api_metadata }o--|| auth_vessels : ""
api_metrics }o--|| api_metadata : ""
api_moorages }o--|| api_metadata : ""
api_stays }o--|| api_metadata : ""
api_moorages }o--|| api_stays_at : ""
api_stays }o--|| api_moorages : ""
api_stays }o--|| api_stays_at : ""
auth_otp |o--|| auth_accounts : ""
auth_vessels |o--|| auth_accounts : ""
```

271
README.md
View File

@@ -1,23 +1,88 @@
# PostgSail
<br/>
<p align="center">
<a href="https://github.com/xbgmsharp/postgsail">
<img src="https://iot.openplotter.cloud/android-chrome-192x192.png" alt="Logo" width="80" height="80">
</a>
An effortless cloud-based solution for storing and sharing your SignalK data. It allows you to effortlessly log your sails and monitor your boat with historical data.
<h3 align="center">PostgSail</h3>
<p align="center">
PostgSail is an open-source alternative to traditional vessel data management!
<br/>
<br/>
<a href="https://github.com/xbgmsharp/postgsail/blob/main/docs/README.md"><strong>Explore the docs »</strong></a>
<br/>
<br/>
<a href="#about-the-project">View Demo</a>
.
<a href="https://github.com/xbgmsharp/postgsail/issues">Report Bug</a>
.
<a href="https://github.com/xbgmsharp/postgsail/issues">Request Feature</a>
.
<a href="https://xbgmsharp.github.io/postgsail/">Website</a>
.
<a href="https://github.com/sponsors/xbgmsharp">Sponsors</a>
.
<a href="https://discord.gg/uuZrwz4dCS">Discord</a>
.
<a href="https://deepwiki.com/xbgmsharp/postgsail/">DeepWiki</a>
</p>
</p>
[![release](https://img.shields.io/github/release/xbgmsharp/postgsail?include_prereleases=&sort=semver&color=blue)](https://github.com/xbgmsharp/postgsail/releases/latest)
[![License](https://img.shields.io/github/license/xbgmsharp/postgsail)](#license)
[![issues - postgsail](https://img.shields.io/github/issues/xbgmsharp/postgsail)](https://github.com/xbgmsharp/postgsail/issues)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com)
![Contributors](https://img.shields.io/github/contributors/xbgmsharp/postgsail?color=dark-green)
[![GitHub Repo stars](https://img.shields.io/github/stars/xbgmsharp/postgsail?style=social)](https://github.com/xbgmsharp/postgsail/stargazers)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/xbgmsharp/postgsail)
[![Test services db, api](https://github.com/xbgmsharp/postgsail/actions/workflows/db-test.yml/badge.svg)](https://github.com/xbgmsharp/postgsail/actions/workflows/db-test.yml)
[![Test services db, api, web](https://github.com/xbgmsharp/postgsail/actions/workflows/frontend-test.yml/badge.svg)](https://github.com/xbgmsharp/postgsail/actions/workflows/frontend-test.yml)
[![Test services db, grafana](https://github.com/xbgmsharp/postgsail/actions/workflows/grafana-test.yml/badge.svg)](https://github.com/xbgmsharp/postgsail/actions/workflows/grafana-test.yml)
signalk-postgsail:
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/signalk-postgsail.svg)](https://github.com/xbgmsharp/signalk-postgsail/releases/latest)
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/signalk-postgsail.svg?color=blue)](https://github.com/xbgmsharp/signalk-postgsail/releases/latest)
postgsail-backend:
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/postgsail.svg?color=blue)](https://github.com/xbgmsharp/postgsail/releases/latest)
postgsail-frontend:
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/vuestic-postgsail.svg)](https://github.com/xbgmsharp/vuestic-postgsail/releases/latest)
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/vuestic-postgsail.svg?color=blue)](https://github.com/xbgmsharp/vuestic-postgsail/releases/latest)
postgsail-telegram-bot:
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/postgsail-telegram-bot.svg)](https://github.com/xbgmsharp/postgsail-telegram-bot/releases/latest)
[![GitHub Release](https://img.shields.io/github/release/xbgmsharp/postgsail-telegram-bot.svg?color=blue)](https://github.com/xbgmsharp/postgsail-telegram-bot/releases/latest)
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/8124/badge)](https://www.bestpractices.dev/projects/8124)
## Table Of Contents
- [Table Of Contents](#table-of-contents)
- [About The Project](#about-the-project)
- [Features](#features)
- [Cloud-hosted PostgSail](#cloud-hosted-postgsail)
- [On-Premise](#on-premise)
- [Roadmap](#roadmap)
- [Contributing](#contributing)
- [Creating A Pull Request](#creating-a-pull-request)
- [License](#license)
- [Acknowledgements](#acknowledgements)
## About The Project
https://github.com/xbgmsharp/signalk-postgsail/assets/1498985/b2669c39-11ad-4a50-9f91-9397f9057ee8
An effortless cloud-based solution for storing and sharing your SignalK data. It allows you to effortlessly log your sails and monitor your boat with historical data.
Here's how:
It is all about SQL, object-relational, time-series, spatial databases with a bit of python.
PostgSail is an open-source alternative to traditional vessel data management.
It is based on a well-known open-source technology stack: SignalK, PostgreSQL, TimescaleDB, PostGIS, PostgREST. It integrates perfectly with a standard monitoring stack like Grafana.
To understand the why and how, you might want to read [Why.md](https://github.com/xbgmsharp/postgsail/blob/main/Why.md)
## Features
@@ -26,6 +91,7 @@ postgsail-telegram-bot:
- Timelapse video your trips, with or without time control.
- Add custom notes to your logs.
- Export to CSV, GPX, GeoJSON, KML and download your logs.
- Export your logs as image (PNG) or video (MP4).
- Aggregate your trip statistics: Longest voyage, time spent at anchorages, home ports etc.
- See your moorages on a global map, with incoming and outgoing voyages from each trip.
- Monitor your boat (position, depth, wind, temperature, battery charge status, etc.) remotely.
@@ -35,191 +101,60 @@ postgsail-telegram-bot:
- Offline mode.
- Low Bandwidth mode.
- Awesome statistics and graphs.
- Create and manage your own dashboards.
- Windy PWS (Personal Weather Station).
- Engine Hours Logger.
- Polar performance.
- Anything missing? just ask!
## Context
## Cloud-hosted PostgSail
It is all about SQL, object-relational, time-series, spatial databases with a bit of python.
Remove the hassle of running PostgSail yourself. Here you can skip the technical setup, the maintenance work and server costs by getting PostgSail on our reliable and secure PostgSail Cloud. Register and try for free at [iot.openplotter.cloud](https://iot.openplotter.cloud/).
PostgSail is an open-source alternative to traditional vessel data management.
It is based on a well known open-source technology stack, Signalk, PostgreSQL, TimescaleDB, PostGIS, PostgREST. It does perfectly integrate with standard monitoring tool stack like Grafana.
PostgSail Cloud is Open Source and free for personal use with a single vessel. If you wish to manage multiple boats, contact us.
To understand the why and how, you might want to read [Why.md](https://github.com/xbgmsharp/postgsail/tree/main/Why.md)
PostgSail is free to use, but is not free to make or host. The stability and accuracy of PostgSail depends on its volunteers and donations from its users. Please consider making an annual recurring gift to PostgSail.
## Architecture
A simple scalable architecture:
## On-Premise
![Architecture overview](https://raw.githubusercontent.com/xbgmsharp/postgsail/main/PostgSail.png "Architecture overview")
Self-host PostgSail where you want and how you want. There are no restrictions, you're in full control. [Install Guide](https://github.com/xbgmsharp/postgsail/blob/main/docs/README.md)
For more clarity and visibility the complete [Entity-Relationship Diagram (ERD)](https://github.com/xbgmsharp/postgsail/tree/main/ERD/README.md) is exported as PNG and SVG files.
PostgSail is free to use, but is not free to make or host. The stability and accuracy of PostgSail depends on its volunteers and donations from its users. Please consider making an annual recurring gift to PostgSail.
## Cloud
## Roadmap
If you prefer not to install or administer your instance of PostgSail, hosted versions of PostgSail are available in the cloud of your choice.
See the [open issues](https://github.com/xbgmsharp/postgsail/issues) for a list of proposed features (and known issues).
### The cloud advantage.
Join the community, Get support and exchange on [Discord](https://discord.gg/uuZrwz4dCS). Missing a feature? just ask!
Hosted and fully managed options for PostgSail, designed for all your deployment and business needs. Register and try for free at https://iot.openplotter.cloud/.
## Contributing
## Using PostgSail
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
* If you have suggestions for features, feel free to [open an issue](https://github.com/xbgmsharp/postgsail/issues/new) to discuss it, or directly create a pull request with necessary changes.
* Please make sure you check your spelling and grammar.
* Create an individual PR for each suggestion.
* Please also read through the [Code Of Conduct](https://github.com/xbgmsharp/postgsail/blob/main/CODE_OF_CONDUCT.md) before posting your first idea as well.
A full-featured development environment.
### Creating A Pull Request
#### With CodeSandbox
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
- Develop on [![CodeSandbox Ready-to-Code](https://img.shields.io/badge/CodeSandbox-Ready--to--Code-blue?logo=codesandbox)](https://codesandbox.io/p/github/xbgmsharp/postgsail/main)
- or via [direct link](https://codesandbox.io/p/github/xbgmsharp/postgsail/main)
## License
#### With DevPod
Distributed under the Apache License Version 2.0. See [LICENSE](https://github.com/xbgmsharp/postgsail/blob/main/LICENSE) for more information.
- [![Open in DevPod!](https://devpod.sh/assets/open-in-devpod.svg)](https://devpod.sh/open#https://github.com/xbgmsharp/postgsail/&workspace=postgsail&provider=docker&ide=openvscode)
- or via [direct link](https://devpod.sh/open#https://github.com/xbgmsharp/postgsail&workspace=postgsail&provider=docker&ide=openvscode)
## Acknowledgements
#### With Docker Dev Environments
- [Open in Docker dev-envs!](https://open.docker.com/dashboard/dev-envs?url=https://github.com/xbgmsharp/postgsail/)
### pre-deploy configuration
To get these running, copy `.env.example` and rename it to `.env`, then set the values accordingly.
```bash
# cp .env.example .env
```
Notice, that `PGRST_JWT_SECRET` must be at least 32 characters long.
`$ head /dev/urandom | tr -dc A-Za-z0-9 | head -c 42 ; echo ''`
```bash
# nano .env
```
### Deploy
By default there is no network set and all data is stored in a Docker volume.
You can update the default settings by editing `docker-compose.yml` and `docker-compose.dev.yml` to your need.
First let's initialize the database.
#### Step 1. Initialize database
First let's import the SQL schema, execute:
```bash
$ docker-compose up db
```
#### Step 2. Start backend (db, api)
Then launch the full stack (db, api) backend, execute:
```bash
$ docker-compose up db api
```
The API should be accessible via port HTTP/3000.
The database should be accessible via port TCP/5432.
You can connect to the database via a web gui like [pgadmin](https://www.pgadmin.org/) or you can use a client [dbeaver](https://dbeaver.io/).
### SQL Configuration
Check and update your postgsail settings via SQL in the table `app_settings`:
```sql
SELECT * FROM app_settings;
```
```sql
UPDATE app_settings
SET
value = 'new_value'
WHERE name = 'app.email_server';
```
### Ingest data
Next, to ingest data from signalk, you need to install [signalk-postgsail](https://github.com/xbgmsharp/signalk-postgsail) plugin on your signalk server instance.
Also, if you like, you can import saillogger data using the postgsail helpers, [postgsail-helpers](https://github.com/xbgmsharp/postgsail-helpers).
You might want to import your influxdb1 data as well, [outflux](https://github.com/timescale/outflux).
For InfluxDB 2.x and 3.x, you will need to enable the 1.x APIs to use them. Consult the InfluxDB documentation for more details.
Last, if you like, you can import the sample data from Signalk NMEA Plaka by running the tests.
If everything goes well, all tests pass successfully and you should receive a few notifications by email, PushOver or Telegram.
[End-to-End (E2E) Testing.](https://github.com/xbgmsharp/postgsail/blob/main/tests/)
```
$ docker-compose up tests
```
### API Documentation
The OpenAPI description output depends on the permissions of the role that is contained in the JWT role claim.
Other applications can also use the [PostgSAIL API](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/xbgmsharp/postgsail/main/openapi.json).
API anonymous:
```
$ curl http://localhost:3000/
```
API user_role:
```
$ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_login_or_signup_fn'
```
API vessel_role:
```
$ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_register_vessel_fn'
```
#### API main workflow
Check the [End-to-End (E2E) test sample](https://github.com/xbgmsharp/postgsail/blob/main/tests/).
### Docker dependencies
`docker-compose` is used to start environment dependencies. Dependencies consist of 3 containers:
- `timescaledb-postgis` alias `db`, PostgreSQL with TimescaleDB extension along with the PostGIS extension.
- `postgrest` alias `api`, Standalone web server that turns your PostgreSQL database directly into a RESTful API.
- `grafana` alias `app`, visualize and monitor your data
### Optional docker images
- [pgAdmin](https://hub.docker.com/r/dpage/pgadmin4), web UI to monitor and manage multiple PostgreSQL
- [Swagger](https://hub.docker.com/r/swaggerapi/swagger-ui), web UI to visualize documentation from PostgREST
```
docker-compose -f docker-compose-optional.yml up
```
### Software reference
Out of the box iot platform using docker with the following software:
An out-of-the-box IoT platform using Docker (could be extended to K3s or K8s) with the following software:
- [Signal K server, a Free and Open Source universal marine data exchange format](https://signalk.org)
- [PostgreSQL, open source object-relational database system](https://postgresql.org)
- [TimescaleDB, Time-series data extends PostgreSQL](https://www.timescale.com)
- [PostGIS, a spatial database extender for PostgreSQL object-relational database.](https://postgis.net/)
- [MobilityDB, An open source geospatial trajectory data management & analysis platform.](https://mobilitydb.com/)
- [Grafana, open observability platform | Grafana Labs](https://grafana.com)
### Support
To get support, please create new [issue](https://github.com/xbgmsharp/postgsail/issues).
There are most likely security flaws and bugs.
### Contribution
I'm happy to accept Pull Requests!
Feel free to contribute.
### License
This is a free software, Apache License Version 2.0.
- And many more

View File

@@ -1,5 +1,3 @@
version: "3.9"
services:
db:
image: xbgmsharp/timescaledb-postgis
@@ -18,7 +16,7 @@ services:
ports:
- "5432:5432"
volumes:
- ./db-data:/var/lib/postgresql/data
- postgres-data:/var/lib/postgresql/data
- ./initdb:/docker-entrypoint-initdb.d
logging:
options:
@@ -39,6 +37,7 @@ services:
- "db:database"
ports:
- "3000:3000"
- "3003:3003"
env_file: .env
environment:
PGRST_DB_SCHEMA: api
@@ -46,8 +45,13 @@ services:
PGRST_OPENAPI_SERVER_PROXY_URI: http://127.0.0.1:3000
PGRST_DB_PRE_REQUEST: public.check_jwt
PGRST_DB_POOL: 20
PGRST_DB_POOL_MAX_IDLETIME: 60
PGRST_DB_POOL_ACQUISITION_TIMEOUT: 20
PGRST_DB_URI: ${PGRST_DB_URI}
PGRST_JWT_SECRET: ${PGRST_JWT_SECRET}
PGRST_SERVER_TIMING_ENABLED: 1
PGRST_DB_MAX_ROWS: 500
PGRST_JWT_CACHE_MAX_LIFETIME: 3600
depends_on:
- db
logging:
@@ -67,18 +71,17 @@ services:
links:
- "db:database"
volumes:
- data:/var/lib/grafana
- data:/var/log/grafana
- grafana-data:/var/lib/grafana
- grafana-data:/var/log/grafana
- ./grafana:/etc/grafana
ports:
- "3001:3000"
env_file: .env
environment:
- GF_INSTALL_PLUGINS=pr0ps-trackmap-panel,fatcloud-windrose-panel
- GF_SECURITY_ADMIN_PASSWORD=${PGSAIL_GRAFANA_PASSWORD}
- GF_USERS_ALLOW_SIGN_UP=false
- GF_SMTP_ENABLED=false
- PGSAIL_GRAFANA_URI=db:5432
- PGSAIL_GRAFANA_PASSWORD=${PGSAIL_GRAFANA_PASSWORD}
depends_on:
- db
logging:
@@ -110,18 +113,30 @@ services:
max-size: 10m
web:
image: xbgmsharp/postgsail-vuestic
image: vuestic-postgsail
build:
context: https://github.com/xbgmsharp/vuestic-postgsail.git#live
dockerfile: Dockerfile
args:
- VITE_PGSAIL_URL=${PGSAIL_API_URL}
- VITE_APP_INCLUDE_DEMOS=false
- VITE_APP_BUILD_VERSION=true
- VITE_APP_TITLE=${VITE_APP_TITLE}
- VITE_GRAFANA_URL=${VITE_GRAFANA_URL}
hostname: web
container_name: web
restart: unless-stopped
links:
- "api:postgrest"
ports:
- 8080:8080
env_file: .env
environment:
- VITE_PGSAIL_URL=${PGSAIL_API_URL}
- VITE_APP_INCLUDE_DEMOS=false
- VITE_APP_BUILD_VERSION=true
- VITE_APP_TITLE=${VITE_APP_TITLE}
- VITE_GRAFANA_URL=${VITE_GRAFANA_URL}
depends_on:
- db
- api
@@ -130,4 +145,5 @@ services:
max-size: 10m
volumes:
data: {}
grafana-data: {}
postgres-data: {}

136
docs/ERD/README.md Normal file
View File

@@ -0,0 +1,136 @@
# PostgSail ERD
The Entity-Relationship Diagram (ERD) provides a graphical representation of database tables, columns, and inter-relationships. ERD can give sufficient information for the database administrator to follow when developing and maintaining the database.
## A global overview
Auto generated Mermaid diagram using [mermerd](https://github.com/KarnerTh/mermerd) and [MermaidJs](https://github.com/mermaid-js/mermaid).
[PostgSail SQL Schema](https://github.com/xbgmsharp/postgsail/tree/main/docs/ERD/postgsail.md "PostgSail SQL Schema")
## Further
There are 3 main schemas in the signalk database:
- API Schema:
- tables
- metrics
- logbook
- ...
- functions
- ...
- Auth Schema:
- tables
- accounts
- vessels
- ...
- functions
- ...
- Public Schema:
- tables
- app_settings
- tpl_messages
- ...
- functions
- ...
## Overview
- Insert data into table metadata from API using PostgREST
- Insert data into table metrics from API using PostgREST
- TimescaleDB Hypertable to store signalk metrics
- pgsql functions to generate logbook, stays, moorages
- CRON functions to process logbook, stays, moorages
- python functions for reverse geocoding and sending notifications via email and/or pushover
- Views statistics, timelapse, monitoring, logs
- Always store time in UTC
## Ingest flowchart
```mermaid
graph LR
A[SignalK] -- HTTP POST --> B{PostgREST}
B -- SQL --> C{PostgreSQL}
C --> D((metadata trigger))
C --> E((metrics trigger))
D --> F{tbl.metadata}
E --> G{tbl.metrics}
E --> H{tbl.logs}
E --> I{tbl.stays}
```
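As a concrete sketch of that ingest path: the SignalK plugin POSTs JSON rows to the PostgREST endpoint for the `metrics` table, authenticated with a vessel token. A minimal, hypothetical example (the token and the metric values are placeholders, and the exact payload the plugin sends may differ):
```bash
# Hypothetical sketch: push one metrics row through PostgREST the way the
# SignalK plugin does. Token and values are placeholders, not real data.
curl -s -X POST http://localhost:3000/metrics \
  -H 'Authorization: Bearer my_token_from_register_vessel_fn' \
  -H 'Content-Type: application/json' \
  -d '{"time": "2023-11-30T12:00:00Z", "latitude": 59.86, "longitude": 23.36, "speedoverground": 5.2, "status": "sailing", "metrics": {}}'
```
The metrics trigger then takes over: it validates the row and feeds the logbook and stays processing shown above.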
## pg_cron flowchart
```mermaid
flowchart TD
A[pg_cron] --> B((cron_new_notification))
A --> C((cron_pre_logbook))
A --> D((cron_new_logbook))
A --> E((cron_new_stay))
A --> F((cron_monitor_offline))
A --> G((cron_monitor_online))
C --> K{Validate logbook details}
D --> L{Update logbook details}
E --> M{Update stay details}
L --> N{Update Moorages details}
M --> N{Update Moorages details}
B --> O{Update account,vessel,otp}
F --> P{Update metadata}
G --> P
A --> Q((cron_post_logbook))
Q --> R{QGIS and notification}
A --> S((cron_video))
A --> U((cron_alert))
S --> T{notification}
U --> T{notification}
```
Cron jobs are not processed by default: if you don't have the correct settings set (SMTP, PushOver, Telegram), you might enter a loop of errors and could be blocked or banned by the external services.
Therefore, by default there are no active jobs, as they require external configuration settings (SMTP, PushOver, Telegram).
To activate all cron jobs, run the following SQL command:
```sql
UPDATE cron.job SET active = True;
```
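To verify which jobs exist and whether they are now active, a quick sketch from the host (it assumes pg_cron lives in the `signalk` database and reuses the `db` service and `username` login used elsewhere in this repo; adjust to your `.env`):
```bash
# List pg_cron jobs and their active flag from inside the db container.
docker compose exec -i db psql -U username signalk -c 'SELECT jobname, active FROM cron.job;'
```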
Be sure to review your postgsail settings via SQL in the table `app_settings`:
```sql
SELECT * FROM public.app_settings;
```
### How to bypass OTP for a local install?
To skip the OTP process, add or update the following JSON key/value in the account preferences.
```json
"email_valid": true
```
SQL query
```sql
UPDATE auth.accounts
SET preferences='{"email_valid": true}'::jsonb || preferences
WHERE email='your.email@domain.com';
```
OTP is created and sent by email using a cron in postgres/cron/job.
```sql
SELECT * FROM auth.otp;
```
Accounts are stored in table signalk/auth/accounts
```sql
SELECT * FROM auth.accounts;
```
You should have a history in table signalk/public/process_queue
```sql
SELECT * from public.process_queue;
```
### How to turn off signups
If you just want to use this as a standalone application and don't want people to be able to sign up for an account, revoke access to the signup function:
```SQL
REVOKE EXECUTE ON FUNCTION api.signup(text,text,text,text) FROM api_anonymous;
```
### How to disable completely anonymous access
If you just want to use this as a standalone application and don't want people to be able to access public accounts, revoke anonymous read access:
```SQL
REVOKE SELECT ON ALL TABLES IN SCHEMA api FROM api_anonymous;
```

314
docs/ERD/postgsail.md Normal file
View File

@@ -0,0 +1,314 @@
```mermaid
erDiagram
api_logbook {
text _from
double_precision _from_lat
double_precision _from_lng
integer _from_moorage_id "Link api.moorages with api.logbook via FOREIGN KEY and REFERENCES"
timestamp_with_time_zone _from_time "{NOT_NULL}"
text _to
double_precision _to_lat
double_precision _to_lng
integer _to_moorage_id "Link api.moorages with api.logbook via FOREIGN KEY and REFERENCES"
timestamp_with_time_zone _to_time
boolean active
double_precision avg_speed
numeric distance "Distance in nautical miles (NM)"
interval duration "Duration in ISO 8601 format"
jsonb extra "Computed SignalK metrics such as runtime, current level, etc."
integer id "{NOT_NULL}"
double_precision max_speed
double_precision max_wind_speed
text name
text notes
tgeogpoint trip "MobilityDB trajectory"
tfloat trip_batt_charge "Battery Charge"
tfloat trip_batt_voltage "Battery Voltage"
tfloat trip_cog "courseovergroundtrue"
tfloat trip_depth "Depth"
tfloat trip_heading "heading True"
tfloat trip_hum_out "Humidity outside"
ttext trip_notes
tfloat trip_pres_out "Pressure outside"
tfloat trip_sog "speedoverground"
tfloat trip_solar_power "solar powerPanel"
tfloat trip_solar_voltage "solar voltage"
ttext trip_status
tfloat trip_tank_level "Tank currentLevel"
tfloat trip_temp_out "Temperature outside"
tfloat trip_temp_water "Temperature water"
tfloat trip_twa "windspeedapparent"
tfloat trip_twd "truewinddirection"
tfloat trip_tws "truewindspeed"
text vessel_id "{NOT_NULL}"
}
api_metadata {
boolean active "trigger monitor online/offline"
boolean active
jsonb available_keys "Signalk paths with unit for custom mapping"
jsonb available_keys
double_precision beam
jsonb configuration "Signalk path mapping for metrics"
jsonb configuration
timestamp_with_time_zone created_at "{NOT_NULL}"
double_precision height
text ip "Store vessel ip address"
text ip
double_precision length
text mmsi
text name
text platform
text plugin_version "{NOT_NULL}"
numeric ship_type
text signalk_version "{NOT_NULL}"
timestamp_with_time_zone time "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text vessel_id "Link auth.vessels with api.metadata via FOREIGN KEY and REFERENCES {NOT_NULL}"
text vessel_id "{NOT_NULL}"
}
api_metadata_ext {
timestamp_with_time_zone created_at "{NOT_NULL}"
bytea image "Store user boat image in bytea format"
text image_b64
text image_type "Store user boat image type in text format"
timestamp_with_time_zone image_updated_at
text image_url
text make_model "Store user make & model in text format"
text polar "Store polar data in CSV notation as used on ORC sailboat data"
timestamp_with_time_zone polar_updated_at
text vessel_id "{NOT_NULL}"
}
api_metrics {
double_precision anglespeedapparent
text client_id "Deprecated client_id to be removed"
double_precision courseovergroundtrue
double_precision latitude "With CONSTRAINT but allow NULL value to be ignored silently by trigger"
double_precision longitude "With CONSTRAINT but allow NULL value to be ignored silently by trigger"
jsonb metrics
double_precision speedoverground
text status
timestamp_with_time_zone time "{NOT_NULL}"
text vessel_id "{NOT_NULL}"
double_precision windspeedapparent
}
api_moorages {
text country
geography geog "postgis geography type default SRID 4326 Unit: degrees"
boolean home_flag
integer id "{NOT_NULL}"
double_precision latitude
double_precision longitude
text name
jsonb nominatim
text notes
jsonb overpass
integer stay_code "Link api.stays_at with api.moorages via FOREIGN KEY and REFERENCES"
text vessel_id "{NOT_NULL}"
}
api_stays {
boolean active
timestamp_with_time_zone arrived "{NOT_NULL}"
timestamp_with_time_zone departed
interval duration "Best to use standard ISO 8601"
geography geog "postgis geography type default SRID 4326 Unit: degrees"
integer id "{NOT_NULL}"
double_precision latitude
double_precision longitude
integer moorage_id "Link api.moorages with api.stays via FOREIGN KEY and REFERENCES"
text name
text notes
integer stay_code "Link api.stays_at with api.stays via FOREIGN KEY and REFERENCES"
text vessel_id "{NOT_NULL}"
}
api_stays_at {
text description "{NOT_NULL}"
integer stay_code "{NOT_NULL}"
}
api_stays_ext {
timestamp_with_time_zone created_at "{NOT_NULL}"
bytea image "Store stays image in bytea format"
text image_b64
text image_type "Store stays image type in text format"
timestamp_with_time_zone image_updated_at
text image_url
integer stay_id "{NOT_NULL}"
text vessel_id "{NOT_NULL}"
}
auth_accounts {
timestamp_with_time_zone connected_at "{NOT_NULL}"
timestamp_with_time_zone created_at "{NOT_NULL}"
citext email "{NOT_NULL}"
text first "User first name with CONSTRAINT CHECK {NOT_NULL}"
integer id "{NOT_NULL}"
text last "User last name with CONSTRAINT CHECK {NOT_NULL}"
text pass "{NOT_NULL}"
jsonb preferences
name role "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text user_id "{NOT_NULL}"
}
auth_otp {
text otp_pass "{NOT_NULL}"
timestamp_with_time_zone otp_timestamp
smallint otp_tries "{NOT_NULL}"
citext user_email "{NOT_NULL}"
}
auth_users {
timestamp_with_time_zone connected_at "{NOT_NULL}"
timestamp_with_time_zone created_at "{NOT_NULL}"
name email "{NOT_NULL}"
text first "{NOT_NULL}"
name id "{NOT_NULL}"
text last "{NOT_NULL}"
jsonb preferences
name role "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text user_id "{NOT_NULL}"
}
auth_vessels {
timestamp_with_time_zone created_at "{NOT_NULL}"
numeric mmsi "MMSI can be optional but if present must be a valid one and unique but must be in numeric range between 100000000 and 800000000"
text name "{NOT_NULL}"
citext owner_email "{NOT_NULL}"
name role "{NOT_NULL}"
timestamp_with_time_zone updated_at "{NOT_NULL}"
text vessel_id "{NOT_NULL}"
}
public_aistypes {
text description
numeric id
}
public_app_settings {
text name "application settings name key {NOT_NULL}"
text value "application settings value {NOT_NULL}"
}
public_badges {
text description
text name
}
public_email_templates {
text email_content
text email_subject
text name
text pushover_message
text pushover_title
}
public_geocoders {
text name
text reverse_url
text url
}
public_iso3166 {
text alpha_2
text alpha_3
text country
integer id
}
public_mid {
text country
integer country_id
numeric id
}
public_mobilitydb_opcache {
integer ltypnum
oid opid
integer opnum
integer rtypnum
}
public_ne_10m_geography_marine_polys {
text changed
text featurecla
geometry geom
integer gid "{NOT_NULL}"
text label
double_precision max_label
double_precision min_label
text name
text name_ar
text name_bn
text name_de
text name_el
text name_en
text name_es
text name_fa
text name_fr
text name_he
text name_hi
text name_hu
text name_id
text name_it
text name_ja
text name_ko
text name_nl
text name_pl
text name_pt
text name_ru
text name_sv
text name_tr
text name_uk
text name_ur
text name_vi
text name_zh
text name_zht
text namealt
bigint ne_id
text note
smallint scalerank
text wikidataid
}
public_process_queue {
text channel "{NOT_NULL}"
integer id "{NOT_NULL}"
text payload "{NOT_NULL}"
timestamp_with_time_zone processed
text ref_id "either user_id or vessel_id {NOT_NULL}"
timestamp_with_time_zone stored "{NOT_NULL}"
}
public_spatial_ref_sys {
character_varying auth_name
integer auth_srid
character_varying proj4text
integer srid "{NOT_NULL}"
character_varying srtext
}
api_logbook }o--|| api_metadata : ""
api_logbook }o--|| api_moorages : ""
api_logbook }o--|| api_moorages : ""
api_logbook }o--|| api_moorages : ""
api_logbook }o--|| api_moorages : ""
api_metadata }o--|| auth_vessels : ""
api_metadata_ext |o--|| api_metadata : ""
api_metrics }o--|| api_metadata : ""
api_moorages }o--|| api_metadata : ""
api_stays }o--|| api_metadata : ""
api_stays_ext }o--|| api_metadata : ""
api_moorages }o--|| api_stays_at : ""
api_stays }o--|| api_moorages : ""
api_stays }o--|| api_stays_at : ""
api_stays_ext |o--|| api_stays : ""
auth_otp |o--|| auth_accounts : ""
auth_vessels }o--|| auth_accounts : ""
```

View File

Image changed (before and after: 360 KiB)

View File

Image changed (before and after: 222 KiB)

View File

Image changed (before and after: 18 KiB)

View File

Image changed (before and after: 195 KiB)

213
docs/README.md Normal file
View File

@@ -0,0 +1,213 @@
## Architecture
Efficient, simple and scalable architecture.
![Architecture overview](https://raw.githubusercontent.com/xbgmsharp/postgsail/main/PostgSail.png "Architecture overview")
For more clarity and visibility the complete [Entity-Relationship Diagram (ERD)](https://github.com/xbgmsharp/postgsail/blob/main/docs/ERD/README.md) is exported as Mermaid, PNG and SVG files.
## Using PostgSail
### Development
A full-featured development environment.
#### With CodeSandbox
- Develop on [![CodeSandbox Ready-to-Code](https://img.shields.io/badge/CodeSandbox-Ready--to--Code-blue?logo=codesandbox)](https://codesandbox.io/p/github/xbgmsharp/postgsail/main)
- or via [direct link](https://codesandbox.io/p/github/xbgmsharp/postgsail/main)
#### With DevPod
- [![Open in DevPod!](https://devpod.sh/assets/open-in-devpod.svg)](https://devpod.sh/open#https://github.com/xbgmsharp/postgsail/&workspace=postgsail&provider=docker&ide=openvscode)
- or via [direct link](https://devpod.sh/open#https://github.com/xbgmsharp/postgsail&workspace=postgsail&provider=docker&ide=openvscode)
#### With Docker Dev Environments
- [Open in Docker dev-envs!](https://open.docker.com/dashboard/dev-envs?url=https://github.com/xbgmsharp/postgsail/)
### On-premise (self-hosted)
This kind of deployment needs the [docker application](https://www.docker.com/) to be installed and running. Check this [tutorial](https://www.docker.com/101-tutorial).
Docker runs pre-packaged applications (aka images), which can be retrieved as sources (Dockerfile and resources) to build, or already built from registries (private or public).
PostgSail depends heavily on [PostgreSQL](https://www.postgresql.org/). Check this [tutorial](https://www.postgresql.org/docs/current/tutorial.html).
#### pre-deploy configuration
To get these running, copy `.env.example` and rename it to `.env`, then set the values accordingly.
```bash
# cp .env.example .env
```
```bash
# nano .env
```
Notice, that `PGRST_JWT_SECRET` must be at least 32 characters long.
`$ cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 42 | head -n 1`
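A small sketch that generates such a secret and writes it into `.env` in one go (assumes GNU sed, as found on most Linux hosts):
```bash
# Generate a 42-character random secret and set PGRST_JWT_SECRET in .env in place.
SECRET=$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 42 | head -n 1)
sed -i "s|^PGRST_JWT_SECRET=.*|PGRST_JWT_SECRET=${SECRET}|" .env
```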
`PGSAIL_APP_URL` is the URL you connect to from your browser.
`PGSAIL_API_URL` is the URL that `PGSAIL_APP_URL` connects to.
`PGRST_DB_URI` is the URI that `PGSAIL_API_URL` connects to.
To summarize:
```mermaid
flowchart LR
subgraph frontend
direction TB
A(PGSAIL_APP_URL)
B(PGSAIL_API_URL)
end
subgraph backend
direction TB
B(PGSAIL_API_URL) -- SQL --> C(PGRST_DB_URI)
end
%% ^ These subgraphs are identical, except for the links to them:
%% Link *to* subgraph1: subgraph1 direction is maintained
User -- HTTP --> A
User -- HTTP --> B
%% Link *within* subgraph2:
%% subgraph2 inherits the direction of the top-level graph (LR)
Boat -- HTTP --> B
```
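As a concrete, hypothetical example, assuming the whole stack runs on a single host reachable as `example.com` (all values below are placeholders):
```bash
# Hypothetical .env values for a single-host public deployment.
PGSAIL_APP_URL=http://example.com:8080   # what the browser loads
PGSAIL_API_URL=http://example.com:3000   # what the webapp and the boat connect to
PGRST_DB_URI=postgres://authenticator:password@db:5432/signalk   # what the API connects to
```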
### Deploy
There are two compose files in use. You can update the default settings by editing `docker-compose.yml` and `docker-compose.dev.yml` to your need.
Now let's initialize the database.
#### Step 1. Initialize database
First let's import the SQL schema, execute:
```bash
$ docker compose up db
```
#### Step 2. Start backend (db, api)
Then launch the full backend stack (db, api), execute:
```bash
$ docker compose up db api
```
The API should be accessible via port HTTP/3000.
The database should be accessible via port TCP/5432.
You can connect to the database via a web gui like [pgadmin](https://www.pgadmin.org/) or you can use a client [dbeaver](https://dbeaver.io/).
```bash
$ docker compose -f docker-compose.yml -f docker-compose.dev.yml up pgadmin
```
Then connect to the web UI on port HTTP/5050.
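If you prefer the command line over a GUI, a minimal sketch using psql from the host (POSTGRES_USER and POSTGRES_PASSWORD are the values from your `.env`):
```bash
# Connect to the signalk database from the host and run a sanity check.
source .env
psql "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/signalk" -c 'SELECT version();'
```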
#### Step 3. Start frontend (web)
Last build and launch the web frontend, execute:
```bash
docker compose build web
docker compose up web
```
The first step can take some time as it will first run a build to generate the static website based on your settings.
The frontend is a SPA (Single-Page Application). With SPA, the server provides the user with an empty HTML page and Javascript. The latter is where the magic happens. When the browser receives the HTML + Javascript, it loads the Javascript. Once loaded, the JS takes place and, through a set of operations in the DOM, renders the necessary components to the page. The routing is then handled by the browser itself, not hitting the server.
The frontend should be accessible via port HTTP/8080.
Users are collaborating on these installation guides:
- [Self-hosted-installation-guide](https://github.com/xbgmsharp/postgsail/blob/main/docs/install_guide.md)
- [Self-hosted-installation-guide on AWS EC2](https://github.com/xbgmsharp/postgsail/blob/main/docs/Self%E2%80%90hosted-installation-guide%20on%20AWS.md)
- [Self-hosted-installation-guide](https://github.com/xbgmsharp/postgsail/blob/main/docs/Self%E2%80%90hosted-installation-guide.md)
### SQL Configuration
Check and update your postgsail settings via SQL in the table `app_settings`:
```sql
SELECT * FROM app_settings;
```
```sql
UPDATE app_settings
SET
value = 'new_value'
WHERE name = 'app.email_server';
```
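The same update can be run from the host without opening a SQL client; a sketch assuming the `db` service and the `username` login used by the test setup (adjust to your own credentials; `smtp.example.com` is a placeholder):
```bash
# Update a single setting through the db container.
docker compose exec -i db psql -U username signalk \
  -c "UPDATE app_settings SET value = 'smtp.example.com' WHERE name = 'app.email_server';"
```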
As it is all about SQL, [Read more](https://github.com/xbgmsharp/postgsail/blob/main/docs/ERD/README.md) about the database to configure your instance and explore your data.
### Ingest data
Next, to ingest data from signalk, you need to install [signalk-postgsail](https://github.com/xbgmsharp/signalk-postgsail) plugin on your signalk server instance.
Also, if you like, you can import saillogger data using the postgsail helpers, [postgsail-helpers](https://github.com/xbgmsharp/postgsail-helpers).
You might want to import your influxdb1 data as well, [outflux](https://github.com/timescale/outflux).
For InfluxDB 2.x and 3.x, you will need to enable the 1.x APIs to use them. Consult the InfluxDB documentation for more details.
Last, if you like, you can import the sample data from Signalk NMEA Plaka by running the tests.
If everything goes well, all tests pass successfully and you should receive a few notifications by email, PushOver or Telegram.
[End-to-End (E2E) Testing.](https://github.com/xbgmsharp/postgsail/blob/main/tests/)
```
$ docker compose up tests
```
### API Documentation
The OpenAPI description output depends on the permissions of the role that is contained in the JWT role claim.
Other applications can also use the [PostgSAIL API](https://petstore.swagger.io/?url=https://raw.githubusercontent.com/xbgmsharp/postgsail/main/openapi.json).
API anonymous:
```
$ curl http://localhost:3000/
```
API user_role:
```
$ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_login_or_signup_fn'
```
API vessel_role:
```
$ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_register_vessel_fn'
```
#### API main workflow
Check the [End-to-End (E2E) test sample](https://github.com/xbgmsharp/postgsail/blob/main/tests/).
### Docker dependencies
`docker compose` is used to start environment dependencies. Dependencies consist of 3 containers:
- `timescaledb-postgis` alias `db`, PostgreSQL with TimescaleDB extension along with the PostGIS extension.
- `postgrest` alias `api`, Standalone web server that turns your PostgreSQL database directly into a RESTful API.
- `grafana` alias `app`, visualize and monitor your data
### Optional docker images
- [pgAdmin](https://hub.docker.com/r/dpage/pgadmin4), web UI to monitor and manage multiple PostgreSQL
- [Swagger](https://hub.docker.com/r/swaggerapi/swagger-ui), web UI to visualize documentation from PostgREST
```
docker compose -f docker-compose-optional.yml up
```

View File

@@ -0,0 +1,288 @@
## Self AWS cloud hosted setup example
In this guide we install, set up and run a PostgSail project on an AWS instance in the cloud.
## On AWS Console
***Launch an instance on AWS EC2***
With the following settings:
+ Ubuntu
+ Instance type: t2.small
+ Create a new key pair:
+ key pair type: RSA
+ Private key file format: .pem
+ The key file is stored for later use
+ Allow SSH traffic from: Anywhere
+ Allow HTTPS traffic from the internet
+ Allow HTTP traffic from the internet
Configure storage:
The standard storage of 8GiB is too small so change this to 16GiB.
Create a new security group
+ Go to: EC2>Security groups>Create security group
Add inbound rules for the following ports: 443, 8080, 80, 3000, 5432, 22, 5050
+ Go to your instance>select your instance>Actions>security>change security group
+ And add the correct security group to the instance.
## Connect to instance with SSH
+ Copy the key file in your default SSH configuration file location (the one VSCode will use)
+ In terminal, go to the folder and run this command to ensure your key is not publicly viewable:
```
chmod 600 "privatekey.pem"
```
We are using VSCode to connect to the instance:
+ Install the Remote - SSH Extension for VSCode
+ Open the Command Palette (Ctrl+Shift+P) and type Remote-SSH: Add New SSH Host:
```
ssh -i "privatekey.pem" ubuntu@ec2-111-22-33-44.eu-west-1.compute.amazonaws.com
```
When prompted, select the default SSH configuration file location.
Open the config file and add the location:
```
IdentityFile ~/.ssh/privatekey.pem
```
## Install Docker on your instance
To install Docker on your new EC2 Ubuntu instance via SSH, follow these steps:
Update your package list:
```
sudo apt-get update
```
Install required dependencies:
```
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
```
Add Docker's official GPG key:
```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```
Add Docker's official repository:
```
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
Update the package list again:
```
sudo apt-get update
```
Install Docker:
```
sudo apt-get install docker-ce docker-ce-cli containerd.io
```
Verify Docker installation:
```
sudo docker --version
```
Add your user to the docker group to run Docker without sudo:
```
sudo usermod -aG docker ubuntu
```
Then, log out and back in or use the following to apply the changes:
```
newgrp docker
```
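To confirm the group change took effect, a quick smoke test (pulls the standard `hello-world` image):
```
docker run --rm hello-world
```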
## Install Postgsail
Git clone the postgsail repo:
```
git clone https://github.com/xbgmsharp/postgsail.git
```
## Edit environment variables
Copy the example.env file and edit the environment variables:
```
cd postgsail
cp .env.example .env
nano .env
```
+ POSTGRES_USER
Come up with a unique username for the database user. This will be used in the docker image when it's started up. Nothing beyond creating a unique username and password is required here.
This environment variable is used in conjunction with `POSTGRES_PASSWORD` to set a user and its password. This variable will create the specified user with superuser power and a database with the same name.
https://github.com/docker-library/docs/blob/master/postgres/README.md
+ POSTGRES_PASSWORD
This should be a good password. It will be used for the postgres user above. Again this is used in the docker image.
This environment variable is required for you to use the PostgreSQL image. It must not be empty or undefined. This environment variable sets the superuser password for PostgreSQL. The default superuser is defined by the POSTGRES_USER environment variable.
+ POSTGRES_DB
This is the name of the database within postgres. You can leave it named postgres but give it a unique name if you like. The schema will be loaded into this database and all data will be stored within it. Since this is used inside the docker image the name really doesn't matter. If you plan to run additional databases within the image, then you might care.
This environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of `POSTGRES_USER` will be used.
+ PGSAIL_APP_URL
This is the webapp (webui) entrypoint, typically the public DNS or IP
```
PGSAIL_APP_URL=http://localhost:8080
```
+ PGSAIL_API_URL
This is the URL to your API on your instance on port 3000:
```
PGSAIL_API_URL=http://localhost:3000
```
+ PGSAIL_AUTHENTICATOR_PASSWORD
This password is used as part of the database access configuration. It's used as part of the access URI later on. (Put the same password in both lines.)
+ PGSAIL_GRAFANA_PASSWORD
This password is used for the Grafana service
+ PGSAIL_GRAFANA_AUTH_PASSWORD
This password appears to be used for user authentication on Grafana.
+ PGSAIL_EMAIL_FROM - PGSAIL_EMAIL_SERVER - PGSAIL_EMAIL_USER - PGSAIL_EMAIL_PASS
PostgSail does not include a built-in email service, only hooks to send email via an existing server.
We use Gmail as a third-party email service:
```
PGSAIL_EMAIL_FROM=email@gmail.com
PGSAIL_EMAIL_SERVER=smtp.gmail.com
PGSAIL_EMAIL_USER=email@gmail.com
```
You need to get the PGSAIL_EMAIL_PASS from your Gmail account security settings: it is not the account password; instead you need to create an "App password".
+ PGRST_JWT_SECRET
This secret key must be at least 32 characters long, you can create a random key with the following command:
```
cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 42 | head -n 1
```
+ Other ENV variables
```
PGSAIL_PUSHOVER_APP_TOKEN
PGSAIL_PUSHOVER_APP
PGSAIL_TELEGRAM_BOT_TOKEN
PGSAIL_AUTHENTICATOR_PASSWORD=password
PGSAIL_GRAFANA_PASSWORD=password
PGSAIL_GRAFANA_AUTH_PASSWORD=password
#PGSAIL_PUSHOVER_APP_TOKEN= Comment if not used
#PGSAIL_PUSHOVER_APP_URL= Comment if not used
#PGSAIL_TELEGRAM_BOT_TOKEN= Comment if not used
```
## Run the project
If needed, add your user to the docker group to run Docker without sudo:
```
sudo usermod -aG docker ubuntu
```
Then, log out and back in or use the following to apply the changes:
```
newgrp docker
```
Step 1. Import the SQL schema, execute:
```
docker compose up db
```
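On first start, the db container initializes PostgreSQL and imports the schema; once the log output settles you can stop it with Ctrl+C. The shutdown should end roughly like this:
```
Gracefully stopping... (press Ctrl+C again to force)
[+] Stopping 1/1
 ✔ Container db  Stopped
```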
Step 2. Launch the full backend stack (db, api), execute:
```
docker compose up db api
```
Step 3. Launch the frontend webapp
```
docker compose up web
```
Open a browser and navigate to your PGSAIL_APP_URL; you should see the PostgSail login screen now:
```
http://ec2-11-234-567-890.eu-west-1.compute.amazonaws.com:8080
```
## Additional database setup
Additional setup will be required.
There is no user account yet, and the cron jobs need to be activated.
We'll do that using pgAdmin.
### Run Pgadmin & connect to database
First add two more variables to your .env file:
```
PGADMIN_DEFAULT_EMAIL=setup@setup.com
PGADMIN_DEFAULT_PASSWORD=123456
```
pgAdmin is defined in docker-compose.dev.yml, so we need to start the service:
```
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d pgadmin
```
All services should be up now: api, db, web and pgadmin. Check that all services are running with:
```
docker ps
```
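The output should look roughly like the sketch below (columns trimmed; the web image name and the pgadmin line, based on the default dpage/pgadmin4 image, are assumptions):
```
NAME      IMAGE                           PORTS
api       postgrest/postgrest             0.0.0.0:3000->3000/tcp
db        xbgmsharp/timescaledb-postgis   0.0.0.0:5432->5432/tcp
web       postgsail-web                   0.0.0.0:8080->8080/tcp
pgadmin   dpage/pgadmin4                  0.0.0.0:5050->80/tcp
```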
To open pgAdmin, navigate to your AWS URL on port 5050:
```
http://ec2-11-234-567-890.eu-west-1.compute.amazonaws.com:5050
```
You are now able to log in with your credentials: PGADMIN_DEFAULT_EMAIL & PGADMIN_DEFAULT_PASSWORD.

In the side panel you will see "Servers(1)"; by clicking it you'll see the server "PostgSail dev db".

**Warning:** A dialog box will open prompting for the password, but it states the wrong username (postgres). You have to change this username by right-clicking the server "PostgSail dev db" > Properties > Connection > enter username: POSTGRES_USER > Save.

Now right-click, select Connect to Server, and enter your password: POSTGRES_PASSWORD.

You'll see two databases: "postgres" and "signalk".
### Enabling cron jobs by SQL query
Cron jobs are not active by default: if you don't have the correct settings (for SMTP, Pushover, Telegram), you might enter a loop of errors and could be blocked or banned by the external services.

Once you have set up the services correctly (entered credentials in the .env file), you can activate the cron jobs in the "postgres" database. (We are only using the SMTP email service in this example.)
+ Right-click on "postgres" database and select "Query Tool"
+ Execute the following SQL query:
```
UPDATE cron.job SET active = True;
```
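To verify, list the jobs and their state (pg_cron stores them in the cron.job table):
```
SELECT jobid, jobname, schedule, active FROM cron.job;
```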
### Adding a user by SQL query
I was not able to create a new user through the web application (still figuring out what is going on), so I added a new user by SQL in the "signalk" database.
+ Right-click on "signalk" database and select "Query Tool"
+ Check the current users in your database by executing the query:
```
SELECT * FROM auth.accounts;
```
+ To add a new user, execute the query:
```
INSERT INTO auth.accounts (
email, first, last, pass, role) VALUES (
'your.email@domain.com'::citext, 'Test'::text, 'your_username'::text, 'your_password'::text, 'user_role'::name)
returning email;
```
When SMTP is correctly set up, you will receive two emails: "Welcome" and "Email verification".
You will be able to log in with these credentials on the web.

Each time you log in, you will receive an "Email verification" email. This is the OTP process; you can bypass it by updating the JSON key value of "preferences":
```
UPDATE auth.accounts
SET preferences='{"email_valid": true}'::jsonb || preferences
WHERE email='your.email@domain.com';
```
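You can confirm the flag is set (preferences is a JSONB column):
```
SELECT email, preferences->>'email_valid' AS email_valid
FROM auth.accounts
WHERE email='your.email@domain.com';
```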
Now you are able to use PostGSail on the web on your own AWS server!


@@ -0,0 +1,166 @@
# Self-hosted setup example environment:
Virtual machine with Ubuntu 22.04 LTS minimal server installation.
Install OpenSSH, update, and install docker-ce manually (the Ubuntu Docker repo is lame).
The following ports are exposed to the internet, either using a static public IP address or port forwarding via your favorite firewall platform. (Not needed by default: Docker will expose all ports on all IPs.)
The base install uses ports 5432 (db), 3000 (api) and 8080 (web).
We'll add HTTPS using an Apache or Nginx proxy once everything is tested. At that point you'll want to open 443 or whatever other port you want to use for secure communication.
For docker-ce installation, this is a decent guide to installation:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04
Third-party services and options:
+ Emails: for email notifications you may want to install a local email handler like Postfix or use a third-party service like Gmail.
+ Pushover: add more here
+ Telegram Bot: add more here
Login to your docker host once it's set up.
Clone the repo to your user directory, copy the example file, and edit the environment variables:
```
$ git clone https://github.com/xbgmsharp/postgsail
cd postgsail
cp .env.example .env
nano .env
```
The example has the following:
```
# POSTGRESQL ENV Settings
POSTGRES_USER=username
POSTGRES_PASSWORD=password
POSTGRES_DB=postgres
# PostgSail ENV Settings
PGSAIL_AUTHENTICATOR_PASSWORD=password
PGSAIL_GRAFANA_PASSWORD=password
PGSAIL_GRAFANA_AUTH_PASSWORD=password
# SMTP server settings
PGSAIL_EMAIL_FROM=root@localhost
PGSAIL_EMAIL_SERVER=localhost
#PGSAIL_EMAIL_USER= Comment if not use
#PGSAIL_EMAIL_PASS= Comment if not use
# Pushover settings
#PGSAIL_PUSHOVER_APP_TOKEN= Comment if not use
#PGSAIL_PUSHOVER_APP_URL= Comment if not use
# TELEGRAM BOT, ask BotFather
#PGSAIL_TELEGRAM_BOT_TOKEN= Comment if not use
# webapp entrypoint, typically the public DNS or IP
PGSAIL_APP_URL=http://localhost:8080
# API entrypoint from the webapp, typically the public DNS or IP
PGSAIL_API_URL=http://localhost:3000
# POSTGREST ENV Settings
PGRST_DB_URI=postgres://authenticator:${PGSAIL_AUTHENTICATOR_PASSWORD}@db:5432/signalk
# % cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 42 | head -n 1
PGRST_JWT_SECRET=_at_least_32__char__long__random
# Grafana ENV Settings
GF_SECURITY_ADMIN_PASSWORD=password
```
All of these need to be configured.
Step by step:
## POSTGRESQL ENV Settings
***POSTGRES_USER***
Come up with a unique username for the database user. This will be used in the Docker image when it's started up. Nothing beyond creating a unique username and password is required here.
This environment variable is used in conjunction with `POSTGRES_PASSWORD` to set a user and its password. This variable will create the specified user with superuser power and a database with the same name.
https://github.com/docker-library/docs/blob/master/postgres/README.md
***POSTGRES_PASSWORD***
This should be a good password. It will be used for the postgres user above. Again this is used in the docker image.
This environment variable is required for you to use the PostgreSQL image. It must not be empty or undefined. This environment variable sets the superuser password for PostgreSQL. The default superuser is defined by the POSTGRES_USER environment variable.
***POSTGRES_DB***
This is the name of the database within Postgres. Give it a unique name if you like. The schema will be loaded into this database and all data will be stored within it. Since this is used inside the Docker image, the name really doesn't matter. If you plan to run additional databases within the image, then you might care.
This environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of `POSTGRES_USER` will be used.
```
# PostgSail ENV Settings
PGSAIL_AUTHENTICATOR_PASSWORD=password
PGSAIL_GRAFANA_PASSWORD=password
PGSAIL_GRAFANA_AUTH_PASSWORD=password
PGSAIL_EMAIL_FROM=root@localhost
PGSAIL_EMAIL_SERVER=localhost
#PGSAIL_EMAIL_USER= Comment if not use
#PGSAIL_EMAIL_PASS= Comment if not use
#PGSAIL_PUSHOVER_APP_TOKEN= Comment if not use
#PGSAIL_PUSHOVER_APP_URL= Comment if not use
#PGSAIL_TELEGRAM_BOT_TOKEN= Comment if not use
PGSAIL_APP_URL=http://localhost:8080
PGSAIL_API_URL=http://localhost:3000
```
***PGSAIL_AUTHENTICATOR_PASSWORD***
This password is used as part of the database access configuration. It's used as part of the access URI later on. (Put the same password in both lines.)
***PGSAIL_GRAFANA_PASSWORD***
This password is used for the Grafana service.
***PGSAIL_GRAFANA_AUTH_PASSWORD***
This password appears to be used for user authentication on Grafana (to be confirmed).
***PGSAIL_EMAIL_FROM***
***PGSAIL_EMAIL_SERVER***
PostgSail does not include a built-in email service, only hooks to send email via an existing server.
You can install an email service on the Ubuntu host or use a third-party service like Gmail. If you choose to use a local service, be aware that some email services will filter it as spam unless you've properly configured it.
***PGSAIL_PUSHOVER_APP_TOKEN***
***PGSAIL_PUSHOVER_APP***
***PGSAIL_TELEGRAM_BOT_TOKEN***
Add more info here
***PGSAIL_APP_URL***
This is the full URL (with domain name or IP) that you access PostgSail via. Once the Nginx SSL proxy is added this may need to be updated. (Service restart required after changing?)
***PGSAIL_API_URL***
This is the API URL that is used for boat and user access. Once the Apache or Nginx SSL proxy is added this may need to be updated. (Same restart?)
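A minimal sketch of such a proxy, assuming Nginx, a hypothetical hostname sail.example.com, and Let's Encrypt certificate paths (adjust to your setup):
```
server {
    listen 443 ssl;
    server_name sail.example.com;   # hypothetical hostname
    ssl_certificate     /etc/letsencrypt/live/sail.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sail.example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8080;   # the web service
        proxy_set_header Host $host;
    }
}
```
PGSAIL_APP_URL (and PGSAIL_API_URL, behind its own server block proxying port 3000) would then point at the https:// address.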
Network configuration example:
This is a Docker question, but in general no special network configuration should be needed; Docker creates and assigns one automatically. All images will be bound to all IPs on the host.
The volumes can live on disk, but a named Docker volume is preferred.
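A minimal sketch of a named volume for the db service, assuming the standard PostgreSQL data path (the project's actual docker-compose.yml may differ):
```
services:
  db:
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data: {}
```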
```
# docker compose -f docker-compose.yml -f docker-compose.dev.yml ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
api postgrest/postgrest "/bin/postgrest" api 2 months ago Up 2 months 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:3003->3003/tcp, :::3003->3003/tcp
app grafana/grafana:latest "/run.sh" app 3 months ago Up 12 days 0.0.0.0:3001->3000/tcp, :::3001->3000/tcp
db xbgmsharp/timescaledb-postgis "docker-entrypoint.sh postgres" db 2 months ago Up 2 months (healthy) 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
```
All services (db, api, web) will be accessible via localhost and other IPs, hence the default configuration.
```bash
# telnet localhost 5432
```
and
```bash
# curl localhost:3000
```
```bash
# docker network ls
NETWORK ID NAME DRIVER SCOPE
...
14f30223ebf2 postgsail_default bridge local
```
Volumes:
```bash
% docker volume ls
DRIVER VOLUME NAME
local postgsail_grafana-data
local postgsail_postgres-data
```

docs/install_guide.md (new file)

@@ -0,0 +1,84 @@
## Connect to the server
```bash
% ssh root@my.server.com
```
## Clone the git repo
```bash
% git clone https://github.com/xbgmsharp/postgsail
Cloning into 'postgsail'...
...
```
## Edit the configuration
```bash
% cd postgsail
% cp .env.example .env
% cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 42 | head -n 1
..
% nano .env
```
## Install Docker
From https://docs.docker.com/engine/install/ubuntu/
```bash
% apt-get update
...
% apt-get install -y ca-certificates curl
...
% install -m 0755 -d /etc/apt/keyrings
% curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
% chmod a+r /etc/apt/keyrings/docker.asc
% echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
% apt-get update
...
% apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
...
```
## Init the database
```bash
% docker compose up db
...
Gracefully stopping... (press Ctrl+C again to force)
[+] Stopping 1/1
✔ Container db Stopped
```
## Start the db with the api
```bash
% docker compose pull api
...
% docker compose up -d db api
```
## Checks
Making sure it works.
```bash
% telnet localhost 5432
...
telnet> quit
Connection closed.
% curl localhost:3000
...
% docker ps
...
% docker logs api
...
```
## Run the web instance
```bash
% docker compose -f docker-compose.yml -f docker-compose.dev.yml build web (be patient)
...
% docker compose -f docker-compose.yml -f docker-compose.dev.yml up web (be patient)
...
web |
web | ➜ Local: http://localhost:8080/
web | ➜ Network: http://172.18.0.4:8080/
```

(File diff suppressed because it is too large.)

@@ -1689,7 +1689,7 @@
},
"timezone": "utc",
"title": "Electrical System",
"uid": "rk0FTiIMk",
"uid": "pgsail_tpl_electrical",
"version": 11,
"weekStart": ""
}


@@ -466,7 +466,7 @@
"timepicker": {},
"timezone": "utc",
"title": "Logbook",
"uid": "E_FUkx9nk",
"uid": "pgsail_tpl_logbook",
"version": 1,
"weekStart": ""
}


@@ -732,7 +732,7 @@
},
"timezone": "utc",
"title": "Monitor",
"uid": "apqDcPjMz",
"uid": "pgsail_tpl_monitor",
"version": 1,
"weekStart": ""
}


@@ -1335,7 +1335,7 @@
},
"timezone": "",
"title": "RPI System",
"uid": "4kxYm6j7k",
"uid": "pgsail_tpl_rpi",
"version": 1,
"weekStart": ""
}


@@ -629,7 +629,7 @@
},
"timezone": "utc",
"title": "Solar System",
"uid": "62bzzlr7z",
"uid": "pgsail_tpl_solar",
"version": 1,
"weekStart": ""
}


@@ -1981,7 +1981,7 @@
},
"timezone": "utc",
"title": "Weather",
"uid": "631a97c2e",
"uid": "pgsail_tpl_weather",
"version": 1,
"weekStart": ""
}


@@ -204,7 +204,7 @@
"editorMode": "code",
"format": "table",
"rawQuery": true,
"rawSql": "SELECT latitude, longitude FROM api.metrics WHERE vessel_id = '${boat}' ORDER BY time ASC LIMIT 1;",
"rawSql": "SELECT latitude, longitude FROM api.metrics WHERE vessel_id = '${boat}' ORDER BY time DESC LIMIT 1;",
"refId": "A",
"sql": {
"columns": [
@@ -291,7 +291,7 @@
},
"timezone": "browser",
"title": "Home",
"uid": "d81aa15b",
"uid": "pgsail_tpl_home",
"version": 1,
"weekStart": ""
}
}


@@ -3,19 +3,22 @@ allow_sign_up = false
auto_assign_org = true
auto_assign_org_role = Editor
[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = email
headers = Login:X-WEBAUTH-LOGIN
auto_sign_up = true
enable_login_token = true
login_maximum_inactive_lifetime_duration = 12h
login_maximum_lifetime_duration = 1d
[dashboards]
default_home_dashboard_path = /etc/grafana/dashboards/home.json
default_home_dashboard_path = /etc/grafana/dashboards/tpl/home.json
min_refresh_interval = 1m
[alerting]
enabled = false
[unified_alerting]
enabled = false
[analytics]
feedback_links_enabled = false
reporting_enabled = false
[news]
news_feed_enabled = false
[help]
enabled = false

View File

@@ -20,6 +20,6 @@ providers:
allowUiUpdates: true
options:
# <string, required> path to dashboard files on disk. Required when using the 'file' type
path: /etc/grafana/dashboards/
path: /etc/grafana/dashboards/tpl/
# <bool> use folder names from filesystem to create folders in Grafana
foldersFromFilesStructure: true


@@ -21,6 +21,8 @@ CREATE TABLE IF NOT EXISTS api.metadata(
plugin_version TEXT NOT NULL,
signalk_version TEXT NOT NULL,
time TIMESTAMPTZ NOT NULL, -- should be rename to last_update !?
platform TEXT NULL,
configuration TEXT NULL,
active BOOLEAN DEFAULT True, -- trigger monitor online/offline
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
@@ -28,18 +30,16 @@ CREATE TABLE IF NOT EXISTS api.metadata(
-- Description
COMMENT ON TABLE
api.metadata
IS 'Stores metadata from vessel';
IS 'Stores metadata received from vessel, aka signalk plugin';
COMMENT ON COLUMN api.metadata.active IS 'trigger monitor online/offline';
-- Index
CREATE INDEX metadata_vessel_id_idx ON api.metadata (vessel_id);
--CREATE INDEX metadata_mmsi_idx ON api.metadata (mmsi);
-- is unused index ?
CREATE INDEX metadata_name_idx ON api.metadata (name);
COMMENT ON COLUMN api.metadata.vessel_id IS 'vessel_id link auth.vessels with api.metadata';
-- Duplicate Indexes
--CREATE INDEX metadata_vessel_id_idx ON api.metadata (vessel_id);
---------------------------------------------------------------------------
-- Metrics from signalk
-- Create vessel status enum
CREATE TYPE status AS ENUM ('sailing', 'motoring', 'moored', 'anchored');
CREATE TYPE status_type AS ENUM ('sailing', 'motoring', 'moored', 'anchored');
-- Table api.metrics
CREATE TABLE IF NOT EXISTS api.metrics (
time TIMESTAMPTZ NOT NULL,
@@ -51,8 +51,8 @@ CREATE TABLE IF NOT EXISTS api.metrics (
courseOverGroundTrue DOUBLE PRECISION NULL,
windSpeedApparent DOUBLE PRECISION NULL,
angleSpeedApparent DOUBLE PRECISION NULL,
status status NULL,
metrics jsonb NULL,
status TEXT NULL,
metrics JSONB NULL,
--CONSTRAINT valid_client_id CHECK (length(client_id) > 10),
--CONSTRAINT valid_latitude CHECK (latitude >= -90 and latitude <= 90),
--CONSTRAINT valid_longitude CHECK (longitude >= -180 and longitude <= 180),
@@ -131,6 +131,8 @@ COMMENT ON COLUMN api.logbook.duration IS 'Best to use standard ISO 8601';
-- Index todo!
CREATE INDEX logbook_vessel_id_idx ON api.logbook (vessel_id);
CREATE INDEX logbook_from_time_idx ON api.logbook (_from_time);
CREATE INDEX logbook_to_time_idx ON api.logbook (_to_time);
CREATE INDEX logbook_from_moorage_id_idx ON api.logbook (_from_moorage_id);
CREATE INDEX logbook_to_moorage_id_idx ON api.logbook (_to_moorage_id);
CREATE INDEX ON api.logbook USING GIST ( track_geom );
@@ -162,6 +164,7 @@ CREATE TABLE IF NOT EXISTS api.stays(
COMMENT ON TABLE
api.stays
IS 'Stores generated stays';
COMMENT ON COLUMN api.stays.duration IS 'Best to use standard ISO 8601';
-- Index
CREATE INDEX stays_vessel_id_idx ON api.stays (vessel_id);
@@ -169,7 +172,6 @@ CREATE INDEX stays_moorage_id_idx ON api.stays (moorage_id);
CREATE INDEX ON api.stays USING GIST ( geog );
COMMENT ON COLUMN api.stays.geog IS 'postgis geography type default SRID 4326 Unit: degres';
-- With other SRID ERROR: Only lon/lat coordinate systems are supported in geography.
COMMENT ON COLUMN api.stays.duration IS 'Best to use standard ISO 8601';
---------------------------------------------------------------------------
-- Moorages
@@ -256,6 +258,8 @@ CREATE FUNCTION metadata_upsert_trigger_fn() RETURNS trigger AS $metadata_upsert
ship_type = NEW.ship_type,
plugin_version = NEW.plugin_version,
signalk_version = NEW.signalk_version,
platform = NEW.platform,
configuration = NEW.configuration,
-- time = NEW.time, ignore the time sent by the vessel as it is out of sync sometimes.
time = NOW(), -- overwrite the time sent by the vessel
active = true
@@ -303,6 +307,22 @@ COMMENT ON FUNCTION
public.metadata_notification_trigger_fn
IS 'process metadata notification from vessel, monitoring_online';
-- FUNCTION Metadata grafana provisioning for new vessel after insert
DROP FUNCTION IF EXISTS metadata_grafana_trigger_fn;
CREATE FUNCTION metadata_grafana_trigger_fn() RETURNS trigger AS $metadata_grafana$
DECLARE
BEGIN
RAISE NOTICE 'metadata_grafana_trigger_fn [%]', NEW;
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('grafana', NEW.id, now(), NEW.vessel_id);
RETURN NULL;
END;
$metadata_grafana$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.metadata_grafana_trigger_fn
IS 'process metadata grafana provisioning from vessel';
---------------------------------------------------------------------------
-- Trigger metadata table
--
@@ -320,7 +340,15 @@ CREATE TRIGGER metadata_notification_trigger AFTER INSERT ON api.metadata
-- Description
COMMENT ON TRIGGER
metadata_notification_trigger ON api.metadata
IS 'AFTER INSERT ON api.metadata run function metadata_update_trigger_fn for notification on new vessel';
IS 'AFTER INSERT ON api.metadata run function metadata_notification_trigger_fn for later notification on new vessel';
-- Metadata trigger AFTER INSERT
CREATE TRIGGER metadata_grafana_trigger AFTER INSERT ON api.metadata
FOR EACH ROW EXECUTE FUNCTION metadata_grafana_trigger_fn();
-- Description
COMMENT ON TRIGGER
metadata_grafana_trigger ON api.metadata
IS 'AFTER INSERT ON api.metadata run function metadata_grafana_trigger_fn for later grafana provisioning on new vessel';
---------------------------------------------------------------------------
-- Trigger Functions metrics table
@@ -430,10 +458,10 @@ CREATE FUNCTION metrics_trigger_fn() RETURNS trigger AS $metrics$
RAISE WARNING 'Metrics Insert first stay as no previous metrics exist, stay_id stay_id [%] [%] [%]', stay_id, NEW.status, NEW.time;
END IF;
-- Check if status is valid enum
SELECT NEW.status::name = any(enum_range(null::status)::name[]) INTO valid_status;
SELECT NEW.status::name = any(enum_range(null::status_type)::name[]) INTO valid_status;
IF valid_status IS False THEN
-- Ignore entry if status is invalid
RAISE WARNING 'Metrics Ignoring metric, invalid status [%]', NEW.status;
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], invalid status [%]', NEW.vessel_id, NEW.status;
RETURN NULL;
END IF;
-- Check if speedOverGround is valid value
@@ -478,7 +506,7 @@ CREATE FUNCTION metrics_trigger_fn() RETURNS trigger AS $metrics$
WHERE id = stay_id;
-- Add stay entry to process queue for further processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_stay', stay_id, now(), current_setting('vessel.id', true));
VALUES ('new_stay', stay_id, NOW(), current_setting('vessel.id', true));
RAISE WARNING 'Metrics Updating Stay end current stay_id [%] [%] [%]', stay_id, NEW.status, NEW.time;
ELSE
RAISE WARNING 'Metrics Invalid stay_id [%] [%]', stay_id, NEW.time;
@@ -529,7 +557,7 @@ CREATE FUNCTION metrics_trigger_fn() RETURNS trigger AS $metrics$
WHERE id = logbook_id;
-- Add logbook entry to process queue for later processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_logbook', logbook_id, now(), current_setting('vessel.id', true));
VALUES ('pre_logbook', logbook_id, NOW(), current_setting('vessel.id', true));
ELSE
RAISE WARNING 'Metrics Invalid logbook_id [%] [%] [%]', logbook_id, NEW.status, NEW.time;
END IF;
@@ -540,7 +568,7 @@ $metrics$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.metrics_trigger_fn
IS 'process metrics from vessel, generate new_logbook and new_stay.';
IS 'process metrics from vessel, generate pre_logbook and new_stay.';
--
-- Triggers logbook update on metrics insert
@@ -603,4 +631,64 @@ CREATE TRIGGER moorage_delete_trigger BEFORE DELETE ON api.moorages
-- Description
COMMENT ON TRIGGER moorage_delete_trigger
ON api.moorages
IS 'Automatic update of name and stay_code on logbook and stays reference';
IS 'Automatic delete logbook and stays reference when delete a moorage';
-- Function process_new on completed logbook
DROP FUNCTION IF EXISTS logbook_completed_trigger_fn;
CREATE FUNCTION logbook_completed_trigger_fn() RETURNS trigger AS $logbook_completed$
DECLARE
BEGIN
RAISE NOTICE 'logbook_completed_trigger_fn [%]', OLD;
RAISE NOTICE 'logbook_completed_trigger_fn [%] [%]', OLD._to_time, NEW._to_time;
-- Add logbook entry to process queue for later processing
--IF ( OLD._to_time <> NEW._to_time ) THEN
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_logbook', NEW.id, NOW(), current_setting('vessel.id', true));
--END IF;
RETURN OLD; -- result is ignored since this is an AFTER trigger
END;
$logbook_completed$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_completed_trigger_fn
IS 'Automatic process_queue for completed logbook._to_time';
-- Triggers logbook completed
--CREATE TRIGGER logbook_completed_trigger AFTER UPDATE ON api.logbook
-- FOR EACH ROW
-- WHEN (OLD._to_time IS DISTINCT FROM NEW._to_time)
-- EXECUTE FUNCTION logbook_completed_trigger_fn();
-- Description
--COMMENT ON TRIGGER logbook_completed_trigger
-- ON api.logbook
-- IS 'Automatic process_queue for completed logbook';
-- Function process_new on completed Stay
DROP FUNCTION IF EXISTS stay_completed_trigger_fn;
CREATE FUNCTION stay_completed_trigger_fn() RETURNS trigger AS $stay_completed$
DECLARE
BEGIN
RAISE NOTICE 'stay_completed_trigger_fn [%]', OLD;
RAISE NOTICE 'stay_completed_trigger_fn [%] [%]', OLD.departed, NEW.departed;
-- Add stay entry to process queue for later processing
--IF ( OLD.departed <> NEW.departed ) THEN
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_stay', NEW.id, NOW(), current_setting('vessel.id', true));
--END IF;
RETURN OLD; -- result is ignored since this is an AFTER trigger
END;
$stay_completed$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.stay_completed_trigger_fn
IS 'Automatic process_queue for completed stay.departed';
-- Triggers stay completed
--CREATE TRIGGER stay_completed_trigger AFTER UPDATE ON api.stays
-- FOR EACH ROW
-- WHEN (OLD.departed IS DISTINCT FROM NEW.departed)
-- EXECUTE FUNCTION stay_completed_trigger_fn();
-- Description
--COMMENT ON TRIGGER stay_completed_trigger
-- ON api.stays
-- IS 'Automatic process_queue for completed stay';


@@ -7,6 +7,12 @@
--
---------------------------------------------------------------------------
-- PostgREST Media Type Handlers
CREATE DOMAIN "text/xml" AS xml;
CREATE DOMAIN "application/geo+json" AS jsonb;
CREATE DOMAIN "application/gpx+xml" AS xml;
CREATE DOMAIN "application/vnd.google-earth.kml+xml" AS xml;
---------------------------------------------------------------------------
-- Functions API schema
-- Timelapse - replay logs
@@ -41,7 +47,7 @@ CREATE OR REPLACE FUNCTION api.timelapse_fn(
WITH logbook as (
SELECT track_geom
FROM api.logbook
WHERE _from_time >= start_log::TIMESTAMPTZ
WHERE _from_time >= start_date::TIMESTAMPTZ
AND _to_time <= end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
AND track_geom IS NOT NULL
ORDER BY _from_time ASC
@@ -69,7 +75,7 @@ CREATE OR REPLACE FUNCTION api.timelapse_fn(
-- Return a GeoJSON MultiLineString
-- result _geojson [null, null]
--raise WARNING 'result _geojson %' , _geojson;
SELECT json_build_object(
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', ARRAY[_geojson] ) INTO geojson;
END;
@@ -79,6 +85,75 @@ COMMENT ON FUNCTION
api.timelapse_fn
IS 'Export all selected logs geometry `track_geom` to a geojson as MultiLineString with empty properties';
DROP FUNCTION IF EXISTS api.timelapse2_fn;
CREATE OR REPLACE FUNCTION api.timelapse2_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL,
IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL,
OUT geojson JSONB) RETURNS JSONB AS $timelapse2$
DECLARE
_geojson jsonb;
BEGIN
-- Using sub query to force id order by
-- Merge GIS track_geom into a GeoJSON Points
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', jsonb_build_object( 'notes', f->'properties'->>'notes'),
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'Point'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE id >= start_log
AND id <= end_log
AND track_geojson IS NOT NULL
ORDER BY _from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
ELSIF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', jsonb_build_object( 'notes', f->'properties'->>'notes'),
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'Point'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE _from_time >= start_date::TIMESTAMPTZ
AND _to_time <= end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
AND track_geojson IS NOT NULL
ORDER BY _from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
ELSE
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', jsonb_build_object( 'notes', f->'properties'->>'notes'),
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'Point'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE track_geojson IS NOT NULL
ORDER BY _from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
END IF;
-- Return a GeoJSON MultiLineString
-- result _geojson [null, null]
raise WARNING 'result _geojson %' , _geojson;
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', _geojson ) INTO geojson;
END;
$timelapse2$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.timelapse2_fn
IS 'Export all selected logs geometry `track_geom` to a geojson as points with notes properties';
-- export_logbook_geojson_fn
DROP FUNCTION IF EXISTS api.export_logbook_geojson_fn;
CREATE FUNCTION api.export_logbook_geojson_fn(IN _id integer, OUT geojson JSONB) RETURNS JSONB AS $export_logbook_geojson$
@@ -111,7 +186,7 @@ COMMENT ON FUNCTION
-- https://opencpn.org/OpenCPN/info/gpxvalidation.html
--
DROP FUNCTION IF EXISTS api.export_logbook_gpx_fn;
CREATE OR REPLACE FUNCTION api.export_logbook_gpx_fn(IN _id INTEGER) RETURNS pg_catalog.xml
CREATE OR REPLACE FUNCTION api.export_logbook_gpx_fn(IN _id INTEGER) RETURNS "text/xml"
AS $export_logbook_gpx$
DECLARE
app_settings jsonb;
@@ -169,7 +244,7 @@ COMMENT ON FUNCTION
-- https://developers.google.com/kml/documentation/kml_tut
-- TODO https://developers.google.com/kml/documentation/time#timespans
DROP FUNCTION IF EXISTS api.export_logbook_kml_fn;
CREATE OR REPLACE FUNCTION api.export_logbook_kml_fn(IN _id INTEGER) RETURNS pg_catalog.xml
CREATE OR REPLACE FUNCTION api.export_logbook_kml_fn(IN _id INTEGER) RETURNS "text/xml"
AS $export_logbook_kml$
DECLARE
logbook_rec record;
@@ -212,7 +287,7 @@ COMMENT ON FUNCTION
DROP FUNCTION IF EXISTS api.export_logbooks_gpx_fn;
CREATE OR REPLACE FUNCTION api.export_logbooks_gpx_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL) RETURNS pg_catalog.xml
IN end_log INTEGER DEFAULT NULL) RETURNS "application/gpx+xml"
AS $export_logbooks_gpx$
declare
merged_jsonb jsonb;
@@ -276,7 +351,7 @@ COMMENT ON FUNCTION
DROP FUNCTION IF EXISTS api.export_logbooks_kml_fn;
CREATE OR REPLACE FUNCTION api.export_logbooks_kml_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL) RETURNS pg_catalog.xml
IN end_log INTEGER DEFAULT NULL) RETURNS "text/xml"
AS $export_logbooks_kml$
DECLARE
_geom geometry;
@@ -334,7 +409,7 @@ COMMENT ON FUNCTION
-- Find all log from and to moorage geopoint within 100m
DROP FUNCTION IF EXISTS api.find_log_from_moorage_fn;
CREATE OR REPLACE FUNCTION api.find_log_from_moorage_fn(IN _id INTEGER, OUT geojson JSON) RETURNS JSON AS $find_log_from_moorage$
CREATE OR REPLACE FUNCTION api.find_log_from_moorage_fn(IN _id INTEGER, OUT geojson JSONB) RETURNS JSONB AS $find_log_from_moorage$
DECLARE
moorage_rec record;
_geojson jsonb;
@@ -357,7 +432,7 @@ CREATE OR REPLACE FUNCTION api.find_log_from_moorage_fn(IN _id INTEGER, OUT geoj
1000 -- in meters ?
);
-- Return a GeoJSON filter on LineString
SELECT json_build_object(
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', public.geojson_py_fn(_geojson, 'Point'::TEXT) ) INTO geojson;
END;
@@ -368,7 +443,7 @@ COMMENT ON FUNCTION
IS 'Find all log from moorage geopoint within 100m';
DROP FUNCTION IF EXISTS api.find_log_to_moorage_fn;
CREATE OR REPLACE FUNCTION api.find_log_to_moorage_fn(IN _id INTEGER, OUT geojson JSON) RETURNS JSON AS $find_log_to_moorage$
CREATE OR REPLACE FUNCTION api.find_log_to_moorage_fn(IN _id INTEGER, OUT geojson JSONB) RETURNS JSONB AS $find_log_to_moorage$
DECLARE
moorage_rec record;
_geojson jsonb;
@@ -391,7 +466,7 @@ CREATE OR REPLACE FUNCTION api.find_log_to_moorage_fn(IN _id INTEGER, OUT geojso
1000 -- in meters ?
);
-- Return a GeoJSON filter on LineString
SELECT json_build_object(
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', public.geojson_py_fn(_geojson, 'Point'::TEXT) ) INTO geojson;
END;
@@ -529,7 +604,7 @@ DROP FUNCTION IF EXISTS api.export_moorages_geojson_fn;
CREATE FUNCTION api.export_moorages_geojson_fn(OUT geojson JSONB) RETURNS JSONB AS $export_moorages_geojson$
DECLARE
BEGIN
SELECT json_build_object(
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features',
( SELECT
@@ -552,7 +627,7 @@ COMMENT ON FUNCTION
IS 'Export moorages as geojson';
DROP FUNCTION IF EXISTS api.export_moorages_gpx_fn;
CREATE FUNCTION api.export_moorages_gpx_fn() RETURNS pg_catalog.xml AS $export_moorages_gpx$
CREATE FUNCTION api.export_moorages_gpx_fn() RETURNS "text/xml" AS $export_moorages_gpx$
DECLARE
app_settings jsonb;
BEGIN
@@ -582,8 +657,8 @@ CREATE FUNCTION api.export_moorages_gpx_fn() RETURNS pg_catalog.xml AS $export_m
xmlelement(name type, 'WPT'),
xmlelement(name link, xmlattributes(concat(app_settings->>'app.url','moorage/', m.id) as href),
xmlelement(name text, m.name)),
xmlelement(name extensions, xmlelement(name "postgsail:mooorage_id", 1),
xmlelement(name "postgsail:link", concat(app_settings->>'app.url','moorage/', m.id)),
xmlelement(name extensions, xmlelement(name "postgsail:mooorage_id", m.id),
xmlelement(name "postgsail:link", concat(app_settings->>'app.url','/moorage/', m.id)),
xmlelement(name "opencpn:guid", uuid_generate_v4()),
xmlelement(name "opencpn:viz", '1'),
xmlelement(name "opencpn:scale_min_max", xmlattributes(true as UseScale, 30000 as ScaleMin, 0 as ScaleMax)
@@ -604,7 +679,7 @@ DROP FUNCTION IF EXISTS api.stats_logs_fn;
CREATE OR REPLACE FUNCTION api.stats_logs_fn(
IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL,
OUT stats JSON) RETURNS JSON AS $stats_logs$
OUT stats JSONB) RETURNS JSONB AS $stats_logs$
DECLARE
_start_date TIMESTAMPTZ DEFAULT '1970-01-01';
_end_date TIMESTAMPTZ DEFAULT NOW();
@@ -748,6 +823,12 @@ CREATE OR REPLACE FUNCTION api.delete_logbook_fn(IN _id integer) RETURNS BOOLEAN
SET notes = 'mark for deletion'
WHERE l.vessel_id = current_setting('vessel.id', false)
AND id = logbook_rec.id;
-- Update metrics status to moored
UPDATE api.metrics
SET status = 'moored'
WHERE time >= logbook_rec._from_time::TIMESTAMPTZ
AND time <= logbook_rec._to_time::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false);
-- Get related stays
SELECT id,departed,active INTO current_stays_id,current_stays_departed,current_stays_active
FROM api.stays s
@@ -776,11 +857,82 @@ CREATE OR REPLACE FUNCTION api.delete_logbook_fn(IN _id integer) RETURNS BOOLEAN
RAISE WARNING '-> delete_logbook_fn delete logbook [%]', logbook_rec.id;
DELETE FROM api.stays WHERE id = current_stays_id;
RAISE WARNING '-> delete_logbook_fn delete stays [%]', current_stays_id;
-- TODO should we subtract (-1) moorages ref count or reprocess it?!?
-- Clean up, Subtract (-1) moorages ref count
UPDATE api.moorages
SET reference_count = reference_count - 1
WHERE vessel_id = current_setting('vessel.id', false)
AND id = previous_stays_id;
RETURN TRUE;
END;
$delete_logbook$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.delete_logbook_fn
IS 'Delete a logbook and dependency stay';
IS 'Delete a logbook and dependency stay';
CREATE OR REPLACE FUNCTION api.monitoring_history_fn(IN time_interval TEXT DEFAULT '24', OUT history_metrics JSONB) RETURNS JSONB AS $monitoring_history$
DECLARE
bucket_interval interval := '5 minutes';
BEGIN
RAISE NOTICE '-> monitoring_history_fn';
SELECT CASE time_interval
WHEN '24' THEN '5 minutes'
WHEN '48' THEN '2 hours'
WHEN '72' THEN '4 hours'
WHEN '168' THEN '7 hours'
ELSE '5 minutes'
END bucket INTO bucket_interval;
RAISE NOTICE '-> monitoring_history_fn % %', time_interval, bucket_interval;
WITH history_table AS (
SELECT time_bucket(bucket_interval::INTERVAL, time) AS time_bucket,
avg((metrics->'environment.water.temperature')::numeric) AS waterTemperature,
avg((metrics->'environment.inside.temperature')::numeric) AS insideTemperature,
avg((metrics->'environment.outside.temperature')::numeric) AS outsideTemperature,
avg((metrics->'environment.wind.speedOverGround')::numeric) AS windSpeedOverGround,
avg((metrics->'environment.inside.relativeHumidity')::numeric) AS insideHumidity,
avg((metrics->'environment.outside.relativeHumidity')::numeric) AS outsideHumidity,
avg((metrics->'environment.outside.pressure')::numeric) AS outsidePressure,
avg((metrics->'environment.inside.pressure')::numeric) AS insidePressure,
avg((metrics->'electrical.batteries.House.capacity.stateOfCharge')::numeric) AS batteryCharge,
avg((metrics->'electrical.batteries.House.voltage')::numeric) AS batteryVoltage,
avg((metrics->'environment.depth.belowTransducer')::numeric) AS depth
FROM api.metrics
WHERE time > (NOW() AT TIME ZONE 'UTC' - INTERVAL '1 hours' * time_interval::NUMERIC)
GROUP BY time_bucket
ORDER BY time_bucket asc
)
SELECT jsonb_agg(history_table) INTO history_metrics FROM history_table;
END
$monitoring_history$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.monitoring_history_fn
IS 'Export metrics from a time period 24h, 48h, 72h, 7d';
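-- Example usage (a sketch, not part of the migration; the argument is the window in hours):
--   SELECT api.monitoring_history_fn('48'); -- last 48 hours, averaged in 2-hour buckets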
CREATE OR REPLACE FUNCTION api.status_fn(out status jsonb) RETURNS JSONB AS $status_fn$
DECLARE
in_route BOOLEAN := False;
BEGIN
RAISE NOTICE '-> status_fn';
SELECT EXISTS ( SELECT id
FROM api.logbook l
WHERE active IS True
LIMIT 1
) INTO in_route;
IF in_route IS True THEN
-- In route from <logbook.from_name> arrived at <>
SELECT jsonb_build_object('status', sa.description, 'location', m.name, 'departed', l._from_time) INTO status
from api.logbook l, api.stays_at sa, api.moorages m
where m.stay_code = sa.stay_code AND l._from_moorage_id = m.id AND l.active IS True;
ELSE
-- At <Stat_at.Desc> in <Moorage.name> departed at <>
SELECT jsonb_build_object('status', sa.description, 'location', m.name, 'arrived', s.arrived) INTO status
from api.stays s, api.stays_at sa, api.moorages m
where s.stay_code = sa.stay_code AND s.moorage_id = m.id AND s.active IS True;
END IF;
END
$status_fn$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.status_fn
IS 'generate vessel status';


@@ -15,12 +15,12 @@
-- security_invoker=true,security_barrier=true
---------------------------------------------------------------------------
CREATE VIEW first_metric AS
CREATE VIEW public.first_metric AS
SELECT *
FROM api.metrics
ORDER BY time ASC LIMIT 1;
CREATE VIEW last_metric AS
CREATE VIEW public.last_metric AS
SELECT *
FROM api.metrics
ORDER BY time DESC LIMIT 1;
@@ -392,9 +392,13 @@ CREATE VIEW api.monitoring_view WITH (security_invoker=true,security_barrier=tru
'properties', jsonb_build_object(
'name', current_setting('vessel.name', false),
'latitude', m.latitude,
'longitude', m.longitude
'longitude', m.longitude,
'time', m.time,
'speedoverground', m.speedoverground,
'windspeedapparent', m.windspeedapparent
)::jsonb ) AS geojson,
current_setting('vessel.name', false) AS name
--( SELECT api.status_fn() ) AS status
FROM api.metrics m
ORDER BY time DESC LIMIT 1;
COMMENT ON VIEW


@@ -8,6 +8,36 @@ select current_database();
-- connect to the DB
\c signalk
-- Check for new logbook pending validation
CREATE FUNCTION cron_process_pre_logbook_fn() RETURNS void AS $$
DECLARE
process_rec record;
BEGIN
-- Check for new logbook pending update
RAISE NOTICE 'cron_process_pre_logbook_fn init loop';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'pre_logbook' AND processed IS NULL
ORDER BY stored ASC LIMIT 100
LOOP
RAISE NOTICE 'cron_process_pre_logbook_fn processing queue [%] for logbook id [%]', process_rec.id, process_rec.payload;
-- update logbook
PERFORM process_pre_logbook_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE 'cron_process_pre_logbook_fn processed queue [%] for logbook id [%]', process_rec.id, process_rec.payload;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_pre_logbook_fn
IS 'init by pg_cron to check for new logbook pending update, if so perform process_logbook_valid_fn';
-- Check for new logbook pending update
CREATE FUNCTION cron_process_new_logbook_fn() RETURNS void AS $$
declare
@@ -94,7 +124,7 @@ $$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_moorage_fn
IS 'init by pg_cron to check for new moorage pending update, if so perform process_moorage_queue_fn';
IS 'Deprecated, init by pg_cron to check for new moorage pending update, if so perform process_moorage_queue_fn';
-- CRON Monitor offline pending notification
create function cron_process_monitor_offline_fn() RETURNS void AS $$
@@ -329,6 +359,144 @@ COMMENT ON FUNCTION
public.cron_process_new_notification_fn
IS 'init by pg_cron to check for new event pending notifications, if so perform process_notification_queue_fn';
-- CRON for new vessel metadata pending grafana provisioning
CREATE FUNCTION cron_process_grafana_fn() RETURNS void AS $$
DECLARE
process_rec record;
data_rec record;
app_settings jsonb;
user_settings jsonb;
BEGIN
-- We run grafana provisioning only after the first received vessel metadata
-- Check for new vessel metadata pending grafana provisioning
RAISE NOTICE 'cron_process_grafana_fn';
FOR process_rec in
SELECT * from process_queue
where channel = 'grafana' and processed is null
order by stored asc
LOOP
RAISE NOTICE '-> cron_process_grafana_fn [%]', process_rec.payload;
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Get vessel details base on metadata id
SELECT * INTO data_rec
FROM api.metadata m, auth.vessels v
WHERE m.id = process_rec.payload::INTEGER
AND m.vessel_id = v.vessel_id;
-- as we got data from the vessel we can do the grafana provisioning.
PERFORM grafana_py_fn(data_rec.name, data_rec.vessel_id, data_rec.owner_email, app_settings);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(data_rec.vessel_id::TEXT);
--RAISE DEBUG '-> DEBUG cron_process_grafana_fn get_user_settings_from_vesselid_fn [%]', user_settings;
-- add user in keycloak
PERFORM keycloak_auth_py_fn(data_rec.vessel_id, user_settings, app_settings);
-- Send notification
PERFORM send_notification_fn('grafana'::TEXT, user_settings::JSONB);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> cron_process_grafana_fn updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_grafana_fn
IS 'init by pg_cron to check for new vessel pending grafana provisioning, if so perform grafana_py_fn';
CREATE OR REPLACE FUNCTION public.cron_process_windy_fn() RETURNS void AS $$
DECLARE
windy_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ;
metric_rec record;
windy_metric jsonb;
app_settings jsonb;
user_settings jsonb;
windy_pws jsonb;
BEGIN
-- Check for new observations pending update
RAISE NOTICE 'cron_windy_fn';
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Find users with Windy active and with an active vessel
-- Map account id to Windy Station ID
FOR windy_rec in
SELECT
a.id,a.email,v.vessel_id,v.name,
COALESCE((a.preferences->'windy_last_metric')::TEXT, default_last_metric::TEXT) as last_metric
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'public_windy')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_windy_fn for [%]', windy_rec;
PERFORM set_config('vessel.id', windy_rec.vessel_id, false);
--RAISE WARNING 'public.cron_process_windy_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(windy_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_windy_fn checking user_settings [%]', user_settings;
-- Get all metrics from the last windy_last_metric avg by 5 minutes
-- TODO json_agg to send all data in once, but issue with py jsonb transformation decimal.
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.outside.temperature')::numeric) AS temperature,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.outside.relativeHumidity')::numeric) AS rh,
avg((m.metrics->'environment.wind.directionTrue')::numeric) AS winddir,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
max((m.metrics->'environment.wind.speedTrue')::numeric) AS gust,
last(latitude, time) AS lat,
last(longitude, time) AS lng
FROM api.metrics m
WHERE vessel_id = windy_rec.vessel_id
AND m.time >= windy_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_windy_fn checking metrics [%]', metric_rec;
-- https://community.windy.com/topic/8168/report-your-weather-station-data-to-windy
-- temp from kelvin to celcuis
-- winddir from radiant to degres
-- rh from ratio to percentage
SELECT jsonb_build_object(
'dateutc', metric_rec.time_bucket,
'station', windy_rec.id,
'name', windy_rec.name,
'lat', metric_rec.lat,
'lon', metric_rec.lng,
'wind', metric_rec.wind,
'gust', metric_rec.gust,
'pressure', metric_rec.pressure,
'winddir', radiantToDegrees(metric_rec.winddir::numeric),
'temp', kelvinToCel(metric_rec.temperature::numeric),
'rh', valToPercent(metric_rec.rh::numeric)
) INTO windy_metric;
RAISE NOTICE '-> cron_windy_fn checking windy_metrics [%]', windy_metric;
SELECT windy_pws_py_fn(windy_metric, user_settings, app_settings) into windy_pws;
RAISE NOTICE '-> cron_windy_fn Windy PWS [%]', ((windy_pws->'header')::JSONB ? 'id');
IF NOT((user_settings->'settings')::JSONB ? 'windy') and ((windy_pws->'header')::JSONB ? 'id') then
RAISE NOTICE '-> cron_windy_fn new Windy PWS [%]', (windy_pws->'header')::JSONB->>'id';
-- Send metrics to Windy
PERFORM api.update_user_preferences_fn('{windy}'::TEXT, ((windy_pws->'header')::JSONB->>'id')::TEXT);
-- Send notification
PERFORM send_notification_fn('windy'::TEXT, user_settings::JSONB);
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{windy_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_windy_fn
IS 'init by pg_cron to create (or update) station and uploading observations to Windy Personal Weather Station observations';
-- CRON for Vacuum database
CREATE FUNCTION cron_vacuum_fn() RETURNS void AS $$
-- ERROR: VACUUM cannot be executed from a function
@@ -349,27 +517,305 @@ COMMENT ON FUNCTION
IS 'init by pg_cron to full vacuum tables on schema api';
-- CRON for alerts notification
CREATE FUNCTION cron_process_alerts_fn() RETURNS void AS $$
CREATE OR REPLACE FUNCTION public.cron_alerts_fn() RETURNS void AS $$
DECLARE
alert_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ;
metric_rec record;
app_settings JSONB;
user_settings JSONB;
alerting JSONB;
_alarms JSONB;
alarms TEXT;
alert_default JSONB := '{
"low_pressure_threshold": 990,
"high_wind_speed_threshold": 30,
"low_water_depth_threshold": 1,
"min_notification_interval": 6,
"high_pressure_drop_threshold": 12,
"low_battery_charge_threshold": 90,
"low_battery_voltage_threshold": 12.5,
"low_water_temperature_threshold": 10,
"low_indoor_temperature_threshold": 7,
"low_outdoor_temperature_threshold": 3
}';
BEGIN
-- Check for new event notification pending update
RAISE NOTICE 'cron_process_alerts_fn';
RAISE NOTICE 'cron_alerts_fn';
FOR alert_rec in
SELECT
a.user_id,a.email,v.vessel_id
FROM auth.accounts a, auth.vessels v, api.metadata m
WHERE m.vessel_id = v.vessel_id
AND a.email = v.owner_email
AND (preferences->'alerting'->'enabled')::boolean = false
LOOP
RAISE NOTICE '-> cron_process_alert_rec_fn for [%]', alert_rec;
a.user_id,a.email,v.vessel_id,
COALESCE((a.preferences->'alert_last_metric')::TEXT, default_last_metric::TEXT) as last_metric,
(alert_default || (a.preferences->'alerting')::JSONB) as alerting,
(a.preferences->'alarms')::JSONB as alarms
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'alerting'->'enabled')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_alerts_fn for [%]', alert_rec;
PERFORM set_config('vessel.id', alert_rec.vessel_id, false);
PERFORM set_config('user.email', alert_rec.email, false);
--RAISE WARNING 'public.cron_process_alert_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(alert_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_alerts_fn checking user_settings [%]', user_settings;
-- Get all metrics from the last last_metric avg by 5 minutes
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.inside.temperature')::numeric) AS intemp,
avg((m.metrics->'environment.outside.temperature')::numeric) AS outtemp,
avg((m.metrics->'environment.water.temperature')::numeric) AS wattemp,
avg((m.metrics->'environment.depth.belowTransducer')::numeric) AS watdepth,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
avg((m.metrics->'electrical.batteries.House.voltage')::numeric) AS voltage,
avg((m.metrics->'electrical.batteries.House.capacity.stateOfCharge')::numeric) AS charge
FROM api.metrics m
WHERE vessel_id = alert_rec.vessel_id
AND m.time >= alert_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_alerts_fn checking metrics [%]', metric_rec;
RAISE NOTICE '-> cron_alerts_fn checking alerting [%]', alert_rec.alerting;
--RAISE NOTICE '-> cron_alerts_fn checking debug [%] [%]', kelvinToCel(metric_rec.intemp), (alert_rec.alerting->'low_indoor_temperature_threshold');
IF kelvinToCel(metric_rec.intemp) < (alert_rec.alerting->'low_indoor_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_indoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_indoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.intemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.intemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold';
END IF;
IF kelvinToCel(metric_rec.outtemp) < (alert_rec.alerting->'low_outdoor_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_outdoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_outdoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.outtemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.outtemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold';
END IF;
IF kelvinToCel(metric_rec.wattemp) < (alert_rec.alerting->'low_water_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.wattemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_temperature_threshold value:'|| kelvinToCel(metric_rec.wattemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold';
END IF;
IF metric_rec.watdepth < (alert_rec.alerting->'low_water_depth_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_depth_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_depth_threshold": {"value": '|| metric_rec.watdepth ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_depth_threshold value:'|| metric_rec.watdepth ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold';
END IF;
if metric_rec.pressure < (alert_rec.alerting->'high_pressure_drop_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_pressure_drop_threshold'->>'date') IS NULL) OR
(((_alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_pressure_drop_threshold": {"value": '|| metric_rec.pressure ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_pressure_drop_threshold value:'|| metric_rec.pressure ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold';
END IF;
IF metric_rec.wind > (alert_rec.alerting->'high_wind_speed_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_wind_speed_threshold'->>'date') IS NULL) OR
(((_alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_wind_speed_threshold": {"value": '|| metric_rec.wind ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_wind_speed_threshold value:'|| metric_rec.wind ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold';
END IF;
if metric_rec.voltage < (alert_rec.alerting->'low_battery_voltage_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_voltage_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_voltage_threshold": {"value": '|| metric_rec.voltage ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_voltage_threshold value:'|| metric_rec.voltage ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold';
END IF;
if (metric_rec.charge*100) < (alert_rec.alerting->'low_battery_charge_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_charge_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_charge_threshold": {"value": '|| (metric_rec.charge*100) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_charge_threshold value:'|| (metric_rec.charge*100) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold';
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{alert_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_alerts_fn
public.cron_alerts_fn
IS 'init by pg_cron to check for alerts';
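-- Example (illustrative values): public.jsonb_recursive_merge keeps any other
-- alarm keys in preferences->'alarms' and overwrites the matching one with the
-- newer value/date, which is what the min_notification_interval check above
-- compares against on the next run:
SELECT public.jsonb_recursive_merge(
    '{"high_wind_speed_threshold": {"value": 21, "date": "2024-01-01T00:00:00Z"}}'::jsonb,
    '{"high_wind_speed_threshold": {"value": 33, "date": "2024-02-01T00:00:00Z"}}'::jsonb);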
-- CRON for no vessel notification
@@ -437,7 +883,7 @@ DECLARE
no_activity_rec record;
user_settings jsonb;
BEGIN
-- Check for vessel with no activity for more than 200 days
-- Check for vessel with no activity for more than 230 days
RAISE NOTICE 'cron_process_no_activity_fn';
FOR no_activity_rec in
SELECT
@@ -445,7 +891,7 @@ BEGIN
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '200 DAYS'
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '230 DAYS'
LOOP
RAISE NOTICE '-> cron_process_no_activity_rec_fn for [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.owner_email, 'recipient', no_activity_rec.first) into user_settings;
@@ -458,7 +904,7 @@ $no_activity$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_no_activity_fn
IS 'init by pg_cron, check for vessel with no activity for more than 200 days then send notification';
IS 'init by pg_cron, check for vessel with no activity for more than 230 days then send notification';
-- CRON for deactivated/deletion
CREATE FUNCTION cron_process_deactivated_fn() RETURNS void AS $deactivated$
@@ -531,7 +977,7 @@ COMMENT ON FUNCTION
-- Need to be in the postgres database.
\c postgres
-- CRON for clean up job details logs
CREATE FUNCTION job_run_details_cleanup_fn() RETURNS void AS $$
CREATE FUNCTION public.job_run_details_cleanup_fn() RETURNS void AS $$
DECLARE
BEGIN
-- Remove job run log older than 3 months

View File

@@ -105,34 +105,49 @@ INSERT INTO public.email_templates VALUES
E'You requested a password recovery. Check your email!\n'),
('telegram_otp',
'Telegram bot',
E'Hello,\nTo connect your account to @postgsail_bot, please type this verification code __OTP_CODE__ back to the bot.\nThe code is valid for 15 minutes.\nThe PostgSail Team',
E'Hello,\nTo connect your account to @postgsail_bot, please type this verification code __OTP_CODE__ back to the bot.\nThe code is valid for 15 minutes.\nFrancois',
'Telegram bot',
E'Hello,\nTo connect your account to a @postgsail_bot. Check your email!\n'),
('telegram_valid',
'Telegram bot',
E'Hello __RECIPIENT__,\nCongratulations! You have just connected your account to your vessel, @postgsail_bot.\n\nThe PostgSail Team',
E'Hello __RECIPIENT__,\nCongratulations! You have just connected your account to your vessel, @postgsail_bot.\nFrancois',
'Telegram bot!',
E'Congratulations!\nYou have just connected your account to your vessel, @postgsail_bot.\n'),
('no_vessel',
'PostgSail add your boat',
E'Hello __RECIPIENT__,\nYou have created an account on PostgSail but you have not created your boat yet.\nIf you need any assistance, I would be happy to help. It is free and open-source.\nThe PostgSail Team',
E'Hello __RECIPIENT__,\nYou created an account on PostgSail but you have not added your boat yet.\nIf you need any assistance, I would be happy to help. It is free and open-source.\nFrancois',
'PostgSail next step',
E'Hello,\nYou should create your vessel. Check your email!\n'),
('no_metadata',
'PostgSail connect your boat',
E'Hello __RECIPIENT__,\nYou have created an account on PostgSail but you have not connected your boat yet.\nIf you need any assistance, I would be happy to help. It is free and open-source.\nThe PostgSail Team',
E'Hello __RECIPIENT__,\nYou created an account on PostgSail but you have not connected your boat yet.\nIf you need any assistance, I would be happy to help. It is free and open-source.\nFrancois',
'PostgSail next step',
E'Hello,\nYou should connect your vessel. Check your email!\n'),
('no_activity',
'PostgSail boat inactivity',
E'Hello __RECIPIENT__,\nWe don\'t see any activity on your account. Do you need any assistance? I would be happy to help. It is free and open-source.\nThe PostgSail Team',
E'Hello __RECIPIENT__,\nWe don\'t see any activity on your account. Do you need any assistance? I would be happy to help. It is free and open-source.\nFrancois.',
'PostgSail inactivity!',
E'We detected inactivity. Check your email!\n'),
('deactivated',
'PostgSail account deactivated',
E'Hello __RECIPIENT__,\nYour account has been deactivated and all your data has been removed from PostgSail system.',
'PostgSail deactivated!',
E'We removed your account. Check your email!\n');
E'We removed your account. Check your email!\n'),
('grafana',
'PostgSail Grafana integration',
E'Hello __RECIPIENT__,\nCongratulations! You unlocked Grafana dashboard.\nSee more details at https://app.openplotter.cloud\nHappy sailing!\nFrancois',
'PostgSail Grafana!',
E'Congratulations!\nYou unlocked Grafana dashboard.\nSee more details at https://app.openplotter.cloud\n'),
('windy',
'PostgSail Windy Weather station',
E'Hello __RECIPIENT__,\nCongratulations! Your boat is now a Windy Weather station.\nSee more details at __APP_URL__/windy\nHappy sailing!\nFrancois',
'PostgSail Windy!',
E'Congratulations!\nYour boat is now a Windy Weather station.\nSee more details at __APP_URL__/windy\n'),
('alert',
'PostgSail Alert',
E'Hello __RECIPIENT__,\nWe detected an alert __ALERT__.\nSee more details at __APP_URL__\nStay safe.\nFrancois',
'PostgSail Alert!',
E'We detected an alert __ALERT__.\n');
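-- Illustrative rendering (not part of the migration): for the 'alert' template
-- above, the send_*_py_fn functions replace __RECIPIENT__, __ALERT__ and
-- __APP_URL__ at send time, producing for example:
--   Hello Sam,
--   We detected an alert high_wind_speed_threshold value:33.
--   See more details at https://app.openplotter.cloud
--   Stay safe.
--   Francois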
---------------------------------------------------------------------------
-- Queue handling
@@ -178,7 +193,10 @@ $new_account_entry$ language plpgsql;
create function new_account_otp_validation_entry_fn() returns trigger as $new_account_otp_validation_entry$
begin
insert into process_queue (channel, payload, stored, ref_id) values ('email_otp', NEW.email, now(), NEW.user_id);
-- Add email_otp check only if not from oauth server
if (NEW.preferences->>'email_verified')::boolean IS NOT True then
insert into process_queue (channel, payload, stored, ref_id) values ('email_otp', NEW.email, now(), NEW.user_id);
end if;
return NEW;
END;
$new_account_otp_validation_entry$ language plpgsql;

View File

@@ -14,7 +14,6 @@ CREATE SCHEMA IF NOT EXISTS public;
-- Functions public schema
-- process single cron event, process_[logbook|stay|moorage]_queue_fn()
--
CREATE OR REPLACE FUNCTION public.logbook_metrics_dwithin_fn(
IN _start text,
IN _end text,
@@ -40,7 +39,7 @@ $logbook_metrics_dwithin$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_metrics_dwithin_fn
IS 'Check if all entries for a logbook are in stationary movement within 15 meters';
IS 'Check if all entries for a logbook are in stationary movement within 50 meters';
-- Update a logbook with avg data
-- TODO using timescale function
@@ -145,8 +144,9 @@ CREATE FUNCTION public.logbook_update_geojson_fn(IN _id integer, IN _start text,
time,
courseovergroundtrue,
speedoverground,
anglespeedapparent,
windspeedapparent,
longitude,latitude,
'' AS notes,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
@@ -380,16 +380,7 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
log_settings jsonb;
user_settings jsonb;
geojson jsonb;
_invalid_time boolean;
_invalid_interval boolean;
_invalid_distance boolean;
count_metric numeric;
previous_stays_id numeric;
current_stays_departed text;
current_stays_id numeric;
current_stays_active boolean;
extra_json jsonb;
geo jsonb;
BEGIN
-- If _id is not NULL
IF _id IS NULL OR _id < 1 THEN
@@ -414,89 +405,22 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
PERFORM set_config('vessel.id', logbook_rec.vessel_id, false);
--RAISE WARNING 'public.process_logbook_queue_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Check if all metrics are within 50 meters based on geo location
count_metric := logbook_metrics_dwithin_fn(logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT, logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
RAISE NOTICE '-> process_logbook_queue_fn logbook_metrics_dwithin_fn count:[%]', count_metric;
-- Calculate logbook data average and geo
-- Update logbook entry with the latest metric data and calculate data
avg_rec := logbook_update_avg_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
geo_rec := logbook_update_geom_distance_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
-- Avoid/ignore/delete logbook stationary movement or time sync issue
-- Check time start vs end
SELECT logbook_rec._to_time::TIMESTAMPTZ < logbook_rec._from_time::TIMESTAMPTZ INTO _invalid_time;
-- Is distance is less than 0.010
SELECT geo_rec._track_distance < 0.010 INTO _invalid_distance;
-- Is duration is less than 100sec
SELECT (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ) < (100::text||' secs')::interval INTO _invalid_interval;
-- if stationary fix data metrics,logbook,stays,moorage
IF _invalid_time IS True OR _invalid_distance IS True
OR _invalid_interval IS True OR count_metric = avg_rec.count_metric
OR avg_rec.count_metric <= 2 THEN
RAISE NOTICE '-> process_logbook_queue_fn invalid logbook data id [%], _invalid_time [%], _invalid_distance [%], _invalid_interval [%], count_metric_in_zone [%], count_metric_log [%]',
logbook_rec.id, _invalid_time, _invalid_distance, _invalid_interval, count_metric, avg_rec.count_metric;
-- Update metrics status to moored
UPDATE api.metrics
SET status = 'moored'
WHERE time >= logbook_rec._from_time::TIMESTAMPTZ
AND time <= logbook_rec._to_time::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false);
-- Update logbook
UPDATE api.logbook
SET notes = 'invalid logbook data, stationary need to fix metrics?'
WHERE id = logbook_rec.id;
-- Get related stays
SELECT id,departed,active INTO current_stays_id,current_stays_departed,current_stays_active
FROM api.stays s
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived = logbook_rec._to_time;
-- Update related stays
UPDATE api.stays
SET notes = 'invalid stays data, stationary need to fix metrics?'
WHERE vessel_id = current_setting('vessel.id', false)
AND arrived = logbook_rec._to_time;
-- Find previous stays
SELECT id INTO previous_stays_id
FROM api.stays s
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived < logbook_rec._to_time
ORDER BY s.arrived DESC LIMIT 1;
-- Update previous stays with the departed time from current stays
-- and set the active state from current stays
UPDATE api.stays
SET departed = current_stays_departed::TIMESTAMPTZ,
active = current_stays_active
WHERE vessel_id = current_setting('vessel.id', false)
AND id = previous_stays_id;
-- Clean up, remove invalid logbook and stay entry
DELETE FROM api.logbook WHERE id = logbook_rec.id;
RAISE WARNING '-> process_logbook_queue_fn delete invalid logbook [%]', logbook_rec.id;
DELETE FROM api.stays WHERE id = current_stays_id;
RAISE WARNING '-> process_logbook_queue_fn delete invalid stays [%]', current_stays_id;
-- TODO should we subtract (-1) moorages ref count or reprocess it?!?
RETURN;
END IF;
-- Do we have an existing moorage within 300m of the new log
-- generate logbook name, concat _from_location and _to_location from moorage name
from_moorage := process_lat_lon_fn(logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
to_moorage := process_lat_lon_fn(logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC);
SELECT CONCAT(from_moorage.moorage_name, ' to ' , to_moorage.moorage_name) INTO log_name;
-- Generate logbook name, concat _from_location and _to_location
-- geo reverse _from_lng _from_lat
-- geo reverse _to_lng _to_lat
--geo := reverse_geocode_py_fn('nominatim', logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
--from_name := geo->>'name';
--geo := reverse_geocode_py_fn('nominatim', logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC);
--to_name := geo->>'name';
--SELECT CONCAT(from_name, ' to ' , to_name) INTO log_name;
-- Process `propulsion.*.runTime` and `navigation.log`
-- Calculate extra json
extra_json := logbook_update_extra_json_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
RAISE NOTICE 'Updating valid logbook entry [%] [%] [%]', logbook_rec.id, logbook_rec._from_time, logbook_rec._to_time;
RAISE NOTICE 'Updating valid logbook entry logbook id:[%] start:[%] end:[%]', logbook_rec.id, logbook_rec._from_time, logbook_rec._to_time;
UPDATE api.logbook
SET
duration = (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ),
@@ -510,7 +434,8 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
name = log_name,
track_geom = geo_rec._track_geom,
distance = geo_rec._track_distance,
extra = extra_json
extra = extra_json,
notes = NULL -- reset pre_log process
WHERE id = logbook_rec.id;
-- GeoJSON require track_geom field
@@ -531,8 +456,8 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
-- Process badges
RAISE NOTICE '-> debug process_logbook_queue_fn user_settings [%]', user_settings->>'email'::TEXT;
PERFORM set_config('user.email', user_settings->>'email'::TEXT, false);
PERFORM badges_logbook_fn(logbook_rec.id);
PERFORM badges_geom_fn(logbook_rec.id);
PERFORM badges_logbook_fn(logbook_rec.id, logbook_rec._to_time::TEXT);
PERFORM badges_geom_fn(logbook_rec.id, logbook_rec._to_time::TEXT);
END;
$process_logbook_queue$ LANGUAGE plpgsql;
-- Description
@@ -603,6 +528,9 @@ CREATE OR REPLACE FUNCTION process_stay_queue_fn(IN _id integer) RETURNS void AS
select sum(departed-arrived) from api.stays where moorage_id = moorage.moorage_id
)
WHERE id = moorage.moorage_id;
-- Process badges
PERFORM badges_moorages_fn();
END;
$process_stay_queue$ LANGUAGE plpgsql;
-- Description
@@ -712,7 +640,7 @@ $process_moorage_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_moorage_queue_fn
IS 'Handle moorage insert or update from stays';
IS 'Handle moorage insert or update from stays, deprecated';
-- process new account notification
DROP FUNCTION IF EXISTS process_account_queue_fn;
@@ -750,7 +678,7 @@ $process_account_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_account_queue_fn
IS 'process new account notification';
IS 'process new account notification, deprecated';
-- process new account otp validation notification
DROP FUNCTION IF EXISTS process_account_otp_validation_queue_fn;
@@ -790,7 +718,7 @@ $process_account_otp_validation_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_account_otp_validation_queue_fn
IS 'process new account otp validation notification';
IS 'process new account otp validation notification, deprecated';
-- process new event notification
DROP FUNCTION IF EXISTS process_notification_queue_fn;
@@ -843,7 +771,7 @@ $process_notification_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_notification_queue_fn
IS 'process new event type notification';
IS 'process new event type notification, new_account, new_vessel, email_otp';
-- process new vessel notification
DROP FUNCTION IF EXISTS process_vessel_queue_fn;
@@ -882,7 +810,7 @@ $process_vessel_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_vessel_queue_fn
IS 'process new vessel notification';
IS 'process new vessel notification, deprecated';
-- Get application settings details from a log entry
DROP FUNCTION IF EXISTS get_app_settings_fn;
@@ -899,14 +827,17 @@ BEGIN
name LIKE 'app.email%'
OR name LIKE 'app.pushover%'
OR name LIKE 'app.url'
OR name LIKE 'app.telegram%';
OR name LIKE 'app.telegram%'
OR name LIKE 'app.grafana_admin_uri'
OR name LIKE 'app.keycloak_uri'
OR name LIKE 'app.windy_apikey';
END;
$get_app_settings$
LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.get_app_settings_fn
IS 'get application settings details, email, pushover, telegram';
IS 'get application settings details, email, pushover, telegram, grafana_admin_uri';
DROP FUNCTION IF EXISTS get_app_url_fn;
CREATE OR REPLACE FUNCTION get_app_url_fn(OUT app_settings jsonb)
@@ -1012,9 +943,7 @@ AS $get_user_settings_from_vesselid$
'boat' , v.name,
'recipient', a.first,
'email', v.owner_email,
'settings', a.preferences,
'pushover_key', a.preferences->'pushover_key'
--'badges', a.preferences->'badges'
'settings', a.preferences
) INTO user_settings
FROM auth.accounts a, auth.vessels v, api.metadata m
WHERE m.vessel_id = v.vessel_id
@@ -1063,7 +992,7 @@ COMMENT ON FUNCTION
---------------------------------------------------------------------------
-- Badges
--
CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id integer) RETURNS VOID AS $badges_logbook$
CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id INTEGER, IN logbook_time TEXT) RETURNS VOID AS $badges_logbook$
DECLARE
_badges jsonb;
_exist BOOLEAN := null;
@@ -1081,7 +1010,7 @@ CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id integer) RETUR
select count(*) into total from api.logbook l where vessel_id = current_setting('vessel.id', false);
if total >= 1 then
-- Add badge
badge := '{"Helmsman": {"log": '|| logbook_id ||', "date":"' || NOW()::timestamp || '"}}';
badge := '{"Helmsman": {"log": '|| logbook_id ||', "date":"' || logbook_time || '"}}';
-- Get existing badges
SELECT preferences->'badges' INTO _badges FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Merge badges
@@ -1105,7 +1034,7 @@ CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id integer) RETUR
--RAISE WARNING '-> Wake Maker max_wind_speed %', max_wind_speed;
if max_wind_speed >= 15 then
-- Create badge
badge := '{"Wake Maker": {"log": '|| logbook_id ||', "date":"' || NOW()::timestamp || '"}}';
badge := '{"Wake Maker": {"log": '|| logbook_id ||', "date":"' || logbook_time || '"}}';
--RAISE WARNING '-> Wake Maker max_wind_speed badge %', badge;
-- Get existing badges
SELECT preferences->'badges' INTO _badges FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
@@ -1130,7 +1059,7 @@ CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id integer) RETUR
--RAISE WARNING '-> Stormtrooper max_wind_speed %', max_wind_speed;
if max_wind_speed >= 30 then
-- Create badge
badge := '{"Stormtrooper": {"log": '|| logbook_id ||', "date":"' || NOW()::timestamp || '"}}';
badge := '{"Stormtrooper": {"log": '|| logbook_id ||', "date":"' || logbook_time || '"}}';
--RAISE WARNING '-> Stormtrooper max_wind_speed badge %', badge;
-- Get existing badges
SELECT preferences->'badges' INTO _badges FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
@@ -1153,7 +1082,7 @@ CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id integer) RETUR
select l.distance into distance from api.logbook l where l.id = logbook_id AND l.distance >= 100 and vessel_id = current_setting('vessel.id', false);
if distance >= 100 then
-- Create badge
badge := '{"Navigator Award": {"log": '|| logbook_id ||', "date":"' || NOW()::timestamp || '"}}';
badge := '{"Navigator Award": {"log": '|| logbook_id ||', "date":"' || logbook_time || '"}}';
-- Get existing badges
SELECT preferences->'badges' INTO _badges FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Merge badges
@@ -1174,7 +1103,7 @@ CREATE OR REPLACE FUNCTION public.badges_logbook_fn(IN logbook_id integer) RETUR
select sum(l.distance) into distance from api.logbook l where vessel_id = current_setting('vessel.id', false);
if distance >= 1000 then
-- Create badge
badge := '{"Captain Award": {"log": '|| logbook_id ||', "date":"' || NOW()::timestamp || '"}}';
badge := '{"Captain Award": {"log": '|| logbook_id ||', "date":"' || logbook_time || '"}}';
-- Get existing badges
SELECT preferences->'badges' INTO _badges FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Merge badges
@@ -1281,7 +1210,7 @@ COMMENT ON FUNCTION
public.badges_moorages_fn
IS 'check moorages for new badges, eg: Explorer, Mooring Pro, Anchormaster';
CREATE OR REPLACE FUNCTION public.badges_geom_fn(IN logbook_id integer) RETURNS VOID AS $badges_geom$
CREATE OR REPLACE FUNCTION public.badges_geom_fn(IN logbook_id INTEGER, IN logbook_time TEXT) RETURNS VOID AS $badges_geom$
DECLARE
_badges jsonb;
_exist BOOLEAN := false;
@@ -1310,7 +1239,7 @@ CREATE OR REPLACE FUNCTION public.badges_geom_fn(IN logbook_id integer) RETURNS
--RAISE WARNING 'geography_marine [%]', _exist;
if _exist is false then
-- Create badge
badge := '{"' || marine_rec.name || '": {"log": '|| logbook_id ||', "date":"' || NOW()::timestamp || '"}}';
badge := '{"' || marine_rec.name || '": {"log": '|| logbook_id ||', "date":"' || logbook_time || '"}}';
-- Get existing badges
SELECT preferences->'badges' INTO _badges FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Merge badges
@@ -1333,8 +1262,8 @@ COMMENT ON FUNCTION
public.badges_geom_fn
IS 'check geometry logbook for new badges, eg: Tropic, Alaska, Geographic zone';
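-- Shape of a stored badge entry (hypothetical values) as merged into
-- preferences->'badges' by badges_logbook_fn/badges_geom_fn above, now stamped
-- with the logbook time instead of NOW():
-- {"Wake Maker": {"log": 42, "date": "2023-06-01T10:00:00+00"}}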
DROP FUNCTION IF EXISTS public.process_logbook_valid_fn;
CREATE OR REPLACE FUNCTION public.process_logbook_valid_fn(IN _id integer) RETURNS void AS $process_logbook_valid$
DROP FUNCTION IF EXISTS public.process_pre_logbook_fn;
CREATE OR REPLACE FUNCTION public.process_pre_logbook_fn(IN _id integer) RETURNS void AS $process_pre_logbook$
DECLARE
logbook_rec record;
avg_rec record;
@@ -1342,15 +1271,17 @@ CREATE OR REPLACE FUNCTION public.process_logbook_valid_fn(IN _id integer) RETUR
_invalid_time boolean;
_invalid_interval boolean;
_invalid_distance boolean;
_invalid_ratio boolean;
count_metric numeric;
previous_stays_id numeric;
current_stays_departed text;
current_stays_id numeric;
current_stays_active boolean;
timebucket boolean;
BEGIN
-- If _id is not NULL
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> process_logbook_valid_fn invalid input %', _id;
RAISE WARNING '-> process_pre_logbook_fn invalid input %', _id;
RETURN;
END IF;
-- Get the logbook record with all necessary fields exist
@@ -1364,16 +1295,16 @@ CREATE OR REPLACE FUNCTION public.process_logbook_valid_fn(IN _id integer) RETUR
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> process_logbook_valid_fn invalid logbook %', _id;
RAISE WARNING '-> process_pre_logbook_fn invalid logbook %', _id;
RETURN;
END IF;
PERFORM set_config('vessel.id', logbook_rec.vessel_id, false);
--RAISE WARNING 'public.process_logbook_queue_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Check if all metrics are within 10 meters based on geo location
-- Check if all metrics are within 50 meters based on geo location
count_metric := logbook_metrics_dwithin_fn(logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT, logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
RAISE NOTICE '-> process_logbook_valid_fn logbook_metrics_dwithin_fn count:[%]', count_metric;
RAISE NOTICE '-> process_pre_logbook_fn logbook_metrics_dwithin_fn count:[%]', count_metric;
-- Calculate logbook data average and geo
-- Update logbook entry with the latest metric data and calculate data
@@ -1387,11 +1318,18 @@ CREATE OR REPLACE FUNCTION public.process_logbook_valid_fn(IN _id integer) RETUR
SELECT geo_rec._track_distance < 0.010 INTO _invalid_distance;
-- Is duration is less than 100sec
SELECT (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ) < (100::text||' secs')::interval INTO _invalid_interval;
-- If we have less than 15 metrics
-- Is within metrics represent more or equal than 60% of the total entry
IF count_metric::NUMERIC <= 15 THEN
SELECT (count_metric::NUMERIC / avg_rec.count_metric::NUMERIC) >= 0.60 INTO _invalid_ratio;
END IF;
-- if stationary fix data metrics,logbook,stays,moorage
IF _invalid_time IS True OR _invalid_distance IS True
OR _invalid_interval IS True OR count_metric = avg_rec.count_metric THEN
RAISE NOTICE '-> process_logbook_queue_fn invalid logbook data id [%], _invalid_time [%], _invalid_distance [%], _invalid_interval [%], within count_metric == total count_metric [%]',
logbook_rec.id, _invalid_time, _invalid_distance, _invalid_interval, count_metric;
OR _invalid_interval IS True OR count_metric = avg_rec.count_metric
OR _invalid_ratio IS True
OR avg_rec.count_metric <= 3 THEN
RAISE NOTICE '-> process_pre_logbook_fn invalid logbook data id [%], _invalid_time [%], _invalid_distance [%], _invalid_interval [%], count_metric_in_zone [%], count_metric_log [%], _invalid_ratio [%]',
logbook_rec.id, _invalid_time, _invalid_distance, _invalid_interval, count_metric, avg_rec.count_metric, _invalid_ratio;
-- Update metrics status to moored
UPDATE api.metrics
SET status = 'moored'
@@ -1428,19 +1366,40 @@ CREATE OR REPLACE FUNCTION public.process_logbook_valid_fn(IN _id integer) RETUR
AND id = previous_stays_id;
-- Clean up, remove invalid logbook and stay entry
DELETE FROM api.logbook WHERE id = logbook_rec.id;
RAISE WARNING '-> process_logbook_queue_fn delete invalid logbook [%]', logbook_rec.id;
RAISE WARNING '-> process_pre_logbook_fn delete invalid logbook [%]', logbook_rec.id;
DELETE FROM api.stays WHERE id = current_stays_id;
RAISE WARNING '-> process_logbook_queue_fn delete invalid stays [%]', current_stays_id;
-- TODO should we subtract (-1) moorages ref count or reprocess it?!?
RAISE WARNING '-> process_pre_logbook_fn delete invalid stays [%]', current_stays_id;
RETURN;
END IF;
--IF (logbook_rec.notes IS NULL) THEN -- run one time only
-- -- If duration is over 24h or number of entry is over 400, check for stays and potential multiple logs with stationary location
-- IF (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ) > INTERVAL '24 hours'
-- OR avg_rec.count_metric > 400 THEN
-- timebucket := public.logbook_metrics_timebucket_fn('15 minutes'::TEXT, logbook_rec.id, logbook_rec._from_time::TIMESTAMPTZ, logbook_rec._to_time::TIMESTAMPTZ);
-- -- If true exit current process as the current logbook need to be re-process.
-- IF timebucket IS True THEN
-- RETURN;
-- END IF;
-- ELSE
-- timebucket := public.logbook_metrics_timebucket_fn('5 minutes'::TEXT, logbook_rec.id, logbook_rec._from_time::TIMESTAMPTZ, logbook_rec._to_time::TIMESTAMPTZ);
-- -- If true exit current process as the current logbook need to be re-process.
-- IF timebucket IS True THEN
-- RETURN;
-- END IF;
-- END IF;
--END IF;
-- Add logbook entry to process queue for later processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_logbook', logbook_rec.id, NOW(), current_setting('vessel.id', true));
END;
$process_logbook_valid$ LANGUAGE plpgsql;
$process_pre_logbook$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_logbook_queue_fn
IS 'Avoid/ignore/delete logbook stationary movement or time sync issue';
public.process_pre_logbook_fn
IS 'Detect/Avoid/ignore/delete logbook stationary movement or time sync issue';
DROP FUNCTION IF EXISTS process_lat_lon_fn;
CREATE OR REPLACE FUNCTION process_lat_lon_fn(IN lon NUMERIC, IN lat NUMERIC,
@@ -1459,10 +1418,9 @@ CREATE OR REPLACE FUNCTION process_lat_lon_fn(IN lon NUMERIC, IN lat NUMERIC,
geo jsonb;
overpass jsonb;
BEGIN
RAISE NOTICE 'process_lat_lon_fn';
-- If _id is valid, not NULL
RAISE NOTICE '-> process_lat_lon_fn';
IF lon IS NULL OR lat IS NULL THEN
RAISE WARNING '-> process_lat_lon_fn invalid input lon,lat %', _id;
RAISE WARNING '-> process_lat_lon_fn invalid input lon %, lat %', lon, lat;
--return NULL;
END IF;
@@ -1490,7 +1448,7 @@ CREATE OR REPLACE FUNCTION process_lat_lon_fn(IN lon NUMERIC, IN lat NUMERIC,
END IF;
END LOOP;
-- if within 200m use existing name and stay_code
-- if within 300m use existing name and stay_code
-- else insert new entry
IF existing_rec.id IS NOT NULL AND existing_rec.id > 0 THEN
RAISE NOTICE '-> process_lat_lon_fn found close by moorage using existing name and stay_code %', existing_rec;
@@ -1501,7 +1459,7 @@ CREATE OR REPLACE FUNCTION process_lat_lon_fn(IN lon NUMERIC, IN lat NUMERIC,
RAISE NOTICE '-> process_lat_lon_fn create new moorage';
-- query overpass api to guess moorage type
overpass := overpass_py_fn(lon::NUMERIC, lat::NUMERIC);
RAISE NOTICE '-> process_lat_lon_fn overpass name:[%] type:[%]', overpass->'name', overpass->'seamark:type';
RAISE NOTICE '-> process_lat_lon_fn overpass name:[%] seamark:type:[%]', overpass->'name', overpass->'seamark:type';
moorage_type = 1; -- Unknown
IF overpass->>'seamark:type' = 'harbour' AND overpass->>'seamark:harbour:category' = 'marina' then
moorage_type = 4; -- Dock
@@ -1557,6 +1515,207 @@ COMMENT ON FUNCTION
public.process_lat_lon_fn
IS 'Add or Update moorage based on lat/lon';
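-- Usage sketch (coordinates are hypothetical; the OUT parameters are elided in
-- this hunk): returns the moorage matched within 300m, or creates a new one
-- typed via the Overpass seamark lookup:
SELECT * FROM process_lat_lon_fn(-1.9373::NUMERIC, 50.7155::NUMERIC);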
CREATE OR REPLACE FUNCTION public.logbook_metrics_timebucket_fn(
IN bucket_interval TEXT,
IN _id INTEGER,
IN _start TIMESTAMPTZ,
IN _end TIMESTAMPTZ,
OUT timebucket boolean) AS $logbook_metrics_timebucket$
DECLARE
time_rec record;
stay_rec record;
log_rec record;
geo_rec record;
ref_time timestamptz;
stay_id integer;
stay_lat DOUBLE PRECISION;
stay_lng DOUBLE PRECISION;
stay_arv timestamptz;
in_interval boolean := False;
log_id integer;
log_lat DOUBLE PRECISION;
log_lng DOUBLE PRECISION;
log_start timestamptz;
in_log boolean := False;
BEGIN
timebucket := False;
-- Agg metrics over a bucket_interval
RAISE NOTICE '-> logbook_metrics_timebucket_fn Starting loop by [%], _start[%], _end[%]', bucket_interval, _start, _end;
for time_rec in
WITH tbl_bucket AS (
SELECT time_bucket(bucket_interval::INTERVAL, time) AS time_bucket,
avg(speedoverground) AS speed,
last(latitude, time) AS lat,
last(longitude, time) AS lng,
st_makepoint(avg(longitude),avg(latitude)) AS geo_point
FROM api.metrics m
WHERE
m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND m.time >= _start::TIMESTAMPTZ
AND m.time <= _end::TIMESTAMPTZ
AND m.vessel_id = current_setting('vessel.id', false)
GROUP BY time_bucket
ORDER BY time_bucket asc
),
tbl_bucket2 AS (
SELECT time_bucket,
speed,
geo_point,lat,lng,
LEAD(time_bucket,1) OVER (
ORDER BY time_bucket asc
) time_interval,
LEAD(geo_point,1) OVER (
ORDER BY time_bucket asc
) geo_interval
FROM tbl_bucket
WHERE speed <= 0.5
)
SELECT time_bucket,
speed,
geo_point,lat,lng,
time_interval,
bucket_interval,
(bucket_interval::interval * 2) AS min_interval,
(time_bucket - time_interval) AS diff_interval,
(time_bucket - time_interval)::INTERVAL < (bucket_interval::interval * 2)::INTERVAL AS to_be_process
FROM tbl_bucket2
WHERE (time_bucket - time_interval)::INTERVAL < (bucket_interval::interval * 2)::INTERVAL
loop
RAISE NOTICE '-> logbook_metrics_timebucket_fn ref_time [%] interval [%] bucket_interval[%]', ref_time, time_rec.time_bucket, bucket_interval;
select ref_time + bucket_interval::interval * 1 >= time_rec.time_bucket into in_interval;
RAISE NOTICE '-> logbook_metrics_timebucket_fn ref_time+inverval[%] interval [%], in_interval [%]', ref_time + bucket_interval::interval * 1, time_rec.time_bucket, in_interval;
if ST_DWithin(Geography(ST_MakePoint(stay_lng, stay_lat)), Geography(ST_MakePoint(time_rec.lng, time_rec.lat)), 50) IS True then
in_interval := True;
end if;
if ST_DWithin(Geography(ST_MakePoint(log_lng, log_lat)), Geography(ST_MakePoint(time_rec.lng, time_rec.lat)), 50) IS False then
in_interval := False;
end if;
if in_interval is true then
ref_time := time_rec.time_bucket;
end if;
RAISE NOTICE '-> logbook_metrics_timebucket_fn ref_time is stay within of next point %', ST_DWithin(Geography(ST_MakePoint(stay_lng, stay_lat)), Geography(ST_MakePoint(time_rec.lng, time_rec.lat)), 50);
RAISE NOTICE '-> logbook_metrics_timebucket_fn ref_time is NOT log within of next point %', ST_DWithin(Geography(ST_MakePoint(log_lng, log_lat)), Geography(ST_MakePoint(time_rec.lng, time_rec.lat)), 50);
if time_rec.time_bucket::TIMESTAMPTZ < _start::TIMESTAMPTZ + bucket_interval::interval * 1 then
in_interval := True;
end if;
RAISE NOTICE '-> logbook_metrics_timebucket_fn ref_time is NOT before start[%] or +interval[%]', (time_rec.time_bucket::TIMESTAMPTZ < _start::TIMESTAMPTZ), (time_rec.time_bucket::TIMESTAMPTZ < _start::TIMESTAMPTZ + bucket_interval::interval * 1);
continue when in_interval is True;
RAISE NOTICE '-> logbook_metrics_timebucket_fn after continue stay_id[%], in_log[%]', stay_id, in_log;
if stay_id is null THEN
RAISE NOTICE '-> Close current logbook logbook_id ref_time [%] time_rec.time_bucket [%]', ref_time, time_rec.time_bucket;
-- Close current logbook
geo_rec := logbook_update_geom_distance_fn(_id, _start::TEXT, time_rec.time_bucket::TEXT);
UPDATE api.logbook
SET
active = false,
_to_time = time_rec.time_bucket,
_to_lat = time_rec.lat,
_to_lng = time_rec.lng,
track_geom = geo_rec._track_geom,
notes = 'updated time_bucket'
WHERE id = _id;
-- Add logbook entry to process queue for later processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('pre_logbook', _id, NOW(), current_setting('vessel.id', true));
RAISE WARNING '-> Updated existing logbook logbook_id [%] [%] and add to process_queue', _id, time_rec.time_bucket;
-- Add new stay
INSERT INTO api.stays
(vessel_id, active, arrived, latitude, longitude, notes)
VALUES (current_setting('vessel.id', false), false, time_rec.time_bucket, time_rec.lat, time_rec.lng, 'autogenerated time_bucket')
RETURNING id, latitude, longitude, arrived INTO stay_id, stay_lat, stay_lng, stay_arv;
RAISE WARNING '-> Add new stay stay_id [%] [%]', stay_id, time_rec.time_bucket;
timebucket := True;
elsif in_log is false THEN
-- Close current stays
UPDATE api.stays
SET
active = false,
departed = ref_time,
notes = 'autogenerated time_bucket'
WHERE id = stay_id;
-- Add stay entry to process queue for further processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_stay', stay_id, now(), current_setting('vessel.id', true));
RAISE WARNING '-> Updated existing stays stay_id [%] departed [%] and add to process_queue', stay_id, ref_time;
-- Add new logbook
INSERT INTO api.logbook
(vessel_id, active, _from_time, _from_lat, _from_lng, notes)
VALUES (current_setting('vessel.id', false), false, ref_time, stay_lat, stay_lng, 'autogenerated time_bucket')
RETURNING id, _from_lat, _from_lng, _from_time INTO log_id, log_lat, log_lng, log_start;
RAISE WARNING '-> Add new logbook, logbook_id [%] [%]', log_id, ref_time;
in_log := true;
stay_id := 0;
stay_lat := null;
stay_lng := null;
timebucket := True;
elsif in_log is true THEN
RAISE NOTICE '-> Close current logbook logbook_id [%], ref_time [%], time_rec.time_bucket [%]', log_id, ref_time, time_rec.time_bucket;
-- Close current logbook
geo_rec := logbook_update_geom_distance_fn(_id, log_start::TEXT, time_rec.time_bucket::TEXT);
UPDATE api.logbook
SET
active = false,
_to_time = time_rec.time_bucket,
_to_lat = time_rec.lat,
_to_lng = time_rec.lng,
track_geom = geo_rec._track_geom,
notes = 'autogenerated time_bucket'
WHERE id = log_id;
-- Add logbook entry to process queue for later processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('pre_logbook', log_id, NOW(), current_setting('vessel.id', true));
RAISE WARNING '-> Update Existing logbook logbook_id [%] [%] and add to process_queue', log_id, time_rec.time_bucket;
-- Add new stay
INSERT INTO api.stays
(vessel_id, active, arrived, latitude, longitude, notes)
VALUES (current_setting('vessel.id', false), false, time_rec.time_bucket, time_rec.lat, time_rec.lng, 'autogenerated time_bucket')
RETURNING id, latitude, longitude, arrived INTO stay_id, stay_lat, stay_lng, stay_arv;
RAISE WARNING '-> Add new stay stay_id [%] [%]', stay_id, time_rec.time_bucket;
in_log := false;
log_id := null;
log_lat := null;
log_lng := null;
timebucket := True;
end if;
RAISE WARNING '-> Update new ref_time [%]', ref_time;
ref_time := time_rec.time_bucket;
end loop;
RAISE NOTICE '-> logbook_metrics_timebucket_fn Ending loop stay_id[%], in_log[%]', stay_id, in_log;
if in_log is true then
RAISE NOTICE '-> Ending log ref_time [%] interval [%]', ref_time, time_rec.time_bucket;
end if;
if stay_id > 0 then
RAISE NOTICE '-> Ending stay ref_time [%] interval [%]', ref_time, time_rec.time_bucket;
select * into stay_rec from api.stays s where arrived = _end;
-- Close current stays
UPDATE api.stays
SET
active = false,
arrived = stay_arv,
notes = 'updated time_bucket'
WHERE id = stay_rec.id;
-- Add stay entry to process queue for further processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_stay', stay_rec.id, now(), current_setting('vessel.id', true));
RAISE WARNING '-> Ending Update Existing stays stay_id [%] arrived [%] and add to process_queue', stay_rec.id, stay_arv;
delete from api.stays where id = stay_id;
RAISE WARNING '-> Ending Delete Existing stays stay_id [%]', stay_id;
stay_arv := null;
stay_id := null;
stay_lat := null;
stay_lng := null;
timebucket := True;
end if;
END;
$logbook_metrics_timebucket$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_metrics_timebucket_fn
IS 'Check if all entries for a logbook are in stationary movement per time bucket of 15 or 5 min, speed <= 0.5 knot, within 50m of the stay point';
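-- Illustrative stand-alone version of the bucketing step used above
-- (TimescaleDB time_bucket() + last()); the interval is a placeholder:
SELECT time_bucket('15 minutes'::INTERVAL, time) AS time_bucket,
    avg(speedoverground) AS speed,
    last(latitude, time) AS lat,
    last(longitude, time) AS lng
    FROM api.metrics m
    WHERE m.latitude IS NOT NULL
        AND m.vessel_id = current_setting('vessel.id', false)
    GROUP BY time_bucket
    ORDER BY time_bucket ASC;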
---------------------------------------------------------------------------
-- TODO add alert monitoring for Battery
@@ -1649,6 +1808,12 @@ BEGIN
--RAISE WARNING 'public.check_jwt() user_role vessel.id [%]', current_setting('vessel.id', false);
--RAISE WARNING 'public.check_jwt() user_role vessel.name [%]', current_setting('vessel.name', false);
ELSIF _role = 'vessel_role' THEN
SELECT current_setting('request.path', true) into _path;
--RAISE WARNING 'req path %', current_setting('request.path', true);
-- Function allow without defined vessel like for anonymous role
IF _path ~ '^\/rpc\/(oauth_\w+)$' THEN
RETURN;
END IF;
-- Extract vessel_id from jwt token
SELECT current_setting('request.jwt.claims', true)::json->>'vid' INTO _vid;
-- Check the vessel and user exist
@@ -1666,7 +1831,7 @@ BEGIN
--RAISE WARNING 'public.check_jwt() user_role vessel.name %', current_setting('vessel.name', false);
--RAISE WARNING 'public.check_jwt() user_role vessel.id %', current_setting('vessel.id', false);
ELSIF _role = 'api_anonymous' THEN
RAISE WARNING 'public.check_jwt() api_anonymous';
--RAISE WARNING 'public.check_jwt() api_anonymous';
-- Check if path is the a valid allow anonymous path
SELECT current_setting('request.path', true) ~ '^/(logs_view|log_view|rpc/timelapse_fn|monitoring_view|stats_logs_view|stats_moorages_view|rpc/stats_logs_fn)$' INTO _ppath;
if _ppath is True then
@@ -1703,7 +1868,7 @@ BEGIN
END IF;
-- Check if boat name match public_vessel name
boat := '^' || _pvessel || '$';
IF _ptype ~ '^public_(logs|timelapse)$' AND _pid IS NOT NULL THEN
IF _ptype ~ '^public_(logs|timelapse)$' AND _pid > 0 THEN
WITH log as (
SELECT vessel_id from api.logbook l where l.id = _pid
)
@@ -1757,6 +1922,8 @@ BEGIN
-- In correct order
perform public.cron_process_new_notification_fn();
perform public.cron_process_monitor_online_fn();
--perform public.cron_process_grafana_fn();
perform public.cron_process_pre_logbook_fn();
perform public.cron_process_new_logbook_fn();
perform public.cron_process_new_stay_fn();
--perform public.cron_process_new_moorage_fn();
@@ -1769,17 +1936,17 @@ $$ language plpgsql;
CREATE OR REPLACE FUNCTION public.delete_account_fn(IN _email TEXT, IN _vessel_id TEXT) RETURNS BOOLEAN
AS $delete_account$
BEGIN
select count(*) from api.metrics m where vessel_id = _vessel_id;
--select count(*) from api.metrics m where vessel_id = _vessel_id;
delete from api.metrics m where vessel_id = _vessel_id;
select * from api.metadata m where vessel_id = _vessel_id;
delete from api.logbook l where vessel_id = _vessel_id;
--select * from api.metadata m where vessel_id = _vessel_id;
delete from api.moorages m where vessel_id = _vessel_id;
delete from api.logbook l where vessel_id = _vessel_id;
delete from api.stays s where vessel_id = _vessel_id;
delete from api.metadata m where vessel_id = _vessel_id;
select * from auth.vessels v where vessel_id = _vessel_id;
--select * from auth.vessels v where vessel_id = _vessel_id;
delete from auth.vessels v where vessel_id = _vessel_id;
select * from auth.accounts a where email = _email;
delete from auth.accounts a where email = _email;
--select * from auth.accounts a where email = _email;
delete from auth.accounts a where email = _email;
RETURN True;
END
$delete_account$ language plpgsql;
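-- Usage sketch (email and vessel id are placeholders); deletions run in
-- dependency order: metrics, moorages, logbook, stays, metadata, vessel, account.
SELECT public.delete_account_fn('user@example.com'::TEXT, 'xxxx-vessel-id'::TEXT);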

View File

@@ -151,3 +151,90 @@ $jsonb_diff_val$ LANGUAGE plpgsql;
COMMENT ON FUNCTION
public.jsonb_diff_val
IS 'Compare two jsonb objects';
---------------------------------------------------------------------------
-- uuid v7 helpers
--
-- https://gist.github.com/kjmph/5bd772b2c2df145aa645b837da7eca74
CREATE OR REPLACE FUNCTION public.timestamp_from_uuid_v7(_uuid uuid)
RETURNS timestamp without time zone
LANGUAGE sql
-- Based off IETF draft, https://datatracker.ietf.org/doc/draft-peabody-dispatch-new-uuid-format/
IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF
AS $$
SELECT to_timestamp(('x0000' || substr(_uuid::text, 1, 8) || substr(_uuid::text, 10, 4))::bit(64)::bigint::numeric / 1000);
$$
;
-- Description
COMMENT ON FUNCTION
public.timestamp_from_uuid_v7
IS 'extract the timestamp from the uuid.';
create or replace function public.uuid_generate_v7()
returns uuid
as $$
begin
-- use random v4 uuid as starting point (which has the same variant we need)
-- then overlay timestamp
-- then set version 7 by flipping the 2 and 1 bit in the version 4 string
return encode(
set_bit(
set_bit(
overlay(uuid_send(gen_random_uuid())
placing substring(int8send(floor(extract(epoch from clock_timestamp()) * 1000)::bigint) from 3)
from 1 for 6
),
52, 1
),
53, 1
),
'hex')::uuid;
end
$$
language plpgsql volatile;
-- Description
COMMENT ON FUNCTION
public.uuid_generate_v7
IS 'Generate UUID v7, Based off IETF draft, https://datatracker.ietf.org/doc/draft-peabody-dispatch-new-uuid-format/';
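-- Round-trip sketch: the millisecond timestamp embedded in a freshly generated
-- v7 uuid should be (approximately) the current time:
SELECT public.timestamp_from_uuid_v7(public.uuid_generate_v7());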
---------------------------------------------------------------------------
-- Conversion helpers
--
CREATE OR REPLACE FUNCTION public.kelvinToCel(IN temperature NUMERIC)
RETURNS NUMERIC
AS $$
BEGIN
-- Round to one decimal place
RETURN ROUND((temperature::numeric - 273.15) * 10) / 10;
END
$$
LANGUAGE plpgsql IMMUTABLE;
-- Description
COMMENT ON FUNCTION
public.kelvinToCel
IS 'convert Kelvin to Celsius';
CREATE OR REPLACE FUNCTION public.radiantToDegrees(IN angle NUMERIC)
RETURNS NUMERIC
AS $$
BEGIN
-- Round to one decimal place
RETURN ROUND(angle::numeric * 57.2958 * 10) / 10;
END
$$
LANGUAGE plpgsql IMMUTABLE;
-- Description
COMMENT ON FUNCTION
public.radiantToDegrees
IS 'convert radians to degrees';
CREATE OR REPLACE FUNCTION public.valToPercent(IN val NUMERIC)
RETURNS NUMERIC
AS $$
BEGIN
RETURN (val * 100);
END
$$
LANGUAGE plpgsql IMMUTABLE;
-- Description
COMMENT ON FUNCTION
public.valToPercent
IS 'convert value to percentage';
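-- Quick sanity checks for the converters above (expected values assume the
-- one-decimal rounding in the function bodies):
SELECT public.kelvinToCel(293.65);      -- 20.5
SELECT public.radiantToDegrees(1.5708); -- 90.0
SELECT public.valToPercent(0.87);       -- 87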

View File

@@ -104,12 +104,12 @@ CREATE OR REPLACE FUNCTION send_email_py_fn(IN email_type TEXT, IN _user JSONB,
AS $send_email_py$
# Import smtplib for the actual sending function
import smtplib
# Import the email modules we need
#from email.message import EmailMessage
from email.utils import formatdate,make_msgid
from email.mime.text import MIMEText
# Use the shared cache to avoid preparing the email metadata
if email_type in SD:
plan = SD[email_type]
@@ -142,6 +142,8 @@ AS $send_email_py$
email_content = email_content.replace('__OTP_CODE__', _user['otp_code'])
if 'reset_qs' in _user and _user['reset_qs']:
email_content = email_content.replace('__RESET_QS__', _user['reset_qs'])
if 'alert' in _user and _user['alert']:
email_content = email_content.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
email_content = email_content.replace('__APP_URL__', app['app.url'])
@@ -231,6 +233,8 @@ AS $send_pushover_py$
pushover_message = pushover_message.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
pushover_message = pushover_message.replace('__BADGE_NAME__', _user['badge'])
if 'alert' in _user and _user['alert']:
pushover_message = pushover_message.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
pushover_message = pushover_message.replace('__APP_URL__', app['app.url'])
@@ -307,6 +311,8 @@ AS $send_telegram_py$
telegram_message = telegram_message.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
telegram_message = telegram_message.replace('__BADGE_NAME__', _user['badge'])
if 'alert' in _user and _user['alert']:
telegram_message = telegram_message.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
telegram_message = telegram_message.replace('__APP_URL__', app['app.url'])
@@ -381,11 +387,11 @@ AS $reverse_geoip_py$
#plpy.notice('IP [{}] [{}]'.format(_ip, r.status_code))
if r.status_code == 200:
#plpy.notice('Got [{}] [{}]'.format(r.text, r.status_code))
return r.json();
return r.json()
else:
plpy.error('Failed to get ip details')
return '{}'
$reverse_geoip_py$ LANGUAGE plpython3u;
return {}
$reverse_geoip_py$ IMMUTABLE strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.reverse_geoip_py_fn
@@ -434,7 +440,7 @@ IMMUTABLE STRICT;
-- Description
COMMENT ON FUNCTION
public.geojson_py_fn
IS 'Parse geojson using plpython3u (should be done in PGSQL)';
IS 'Parse geojson using plpython3u (should be done in PGSQL), deprecated';
DROP FUNCTION IF EXISTS overpass_py_fn;
CREATE OR REPLACE FUNCTION overpass_py_fn(IN lon NUMERIC, IN lat NUMERIC,
@@ -442,7 +448,7 @@ CREATE OR REPLACE FUNCTION overpass_py_fn(IN lon NUMERIC, IN lat NUMERIC,
AS $overpass_py$
"""
Return https://overpass-turbo.eu seamark details within 400m
https://overpass-turbo.eu/s/1D91
https://overpass-turbo.eu/s/1EaG
https://wiki.openstreetmap.org/wiki/Key:seamark:type
"""
import requests
@@ -452,6 +458,12 @@ AS $overpass_py$
headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com'}
payload = """
[out:json][timeout:20];
is_in({0},{1})->.result_areas;
(
area.result_areas["seamark:type"~"(mooring|harbour)"][~"^seamark:.*:category$"~"."];
area.result_areas["leisure"="marina"][~"name"~"."];
);
out tags;
nwr(around:400.0,{0},{1})->.all;
(
nwr.all["seamark:type"~"(mooring|harbour)"][~"^seamark:.*:category$"~"."];
@@ -459,7 +471,7 @@ AS $overpass_py$
nwr.all["leisure"="marina"];
nwr.all["natural"~"(bay|beach)"];
);
out tags qt;
out tags;
""".format(lat, lon)
data = urllib.parse.quote(payload, safe="")
url = f'https://overpass-api.de/api/interpreter?data={data}'
@@ -473,12 +485,375 @@ AS $overpass_py$
if r_dict["elements"]:
if "tags" in r_dict["elements"][0] and r_dict["elements"][0]["tags"]:
return r_dict["elements"][0]["tags"]; # return the first element
return '{}'
return {}
else:
plpy.notice('overpass-api Failed to get overpass-api details')
return '{}'
return {}
$overpass_py$ IMMUTABLE strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.overpass_py_fn
IS 'Return https://overpass-turbo.eu seamark details within 400m using plpython3u';
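-- Usage sketch (coordinates are hypothetical): returns the OSM tags of the
-- closest matching seamark/marina within 400m as jsonb, or {} when none match:
SELECT overpass_py_fn(-1.9373::NUMERIC, 50.7155::NUMERIC);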
---------------------------------------------------------------------------
-- Provision Grafana SQL
--
CREATE OR REPLACE FUNCTION grafana_py_fn(IN _v_name TEXT, IN _v_id TEXT,
IN _u_email TEXT, IN app JSONB) RETURNS VOID
AS $grafana_py$
"""
https://grafana.com/docs/grafana/latest/developers/http_api/
Create organization based on vessel name
Create user based on user email
Add user to organization
Add data_source to organization
Add dashboard to organization
Update organization preferences
"""
import requests
import json
import re
grafana_uri = None
if 'app.grafana_admin_uri' in app and app['app.grafana_admin_uri']:
grafana_uri = app['app.grafana_admin_uri']
else:
plpy.error('Error no grafana_admin_uri defined, check app settings')
return None
b_name = None
if not _v_name:
b_name = _v_id
else:
b_name = _v_name
# add vessel org
headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com',
'Accept': 'application/json', 'Content-Type': 'application/json'}
path = 'api/orgs'
url = f'{grafana_uri}/{path}'
data_dict = {'name':b_name}
data = json.dumps(data_dict)
r = requests.post(url, data=data, headers=headers)
#print(r.text)
#plpy.notice(r.json())
if r.status_code == 200 and "orgId" in r.json():
org_id = r.json()['orgId']
else:
plpy.error('Error grafana add vessel org: {}'.format(r.json()))
return None
# add user to vessel org
path = 'api/admin/users'
url = f'{grafana_uri}/{path}'
data_dict = {'orgId':org_id, 'email':_u_email, 'password':'asupersecretpassword'}
data = json.dumps(data_dict)
r = requests.post(url, data=data, headers=headers)
#print(r.text)
#plpy.notice(r.json())
if r.status_code == 200 and "id" in r.json():
user_id = r.json()['id']
else:
plpy.error('Error grafana add user to vessel org')
return
# read data_source
path = 'api/datasources/1'
url = f'{grafana_uri}/{path}'
r = requests.get(url, headers=headers)
#print(r.text)
#plpy.notice(r.json())
data_source = r.json()
data_source['id'] = 0
data_source['orgId'] = org_id
data_source['uid'] = "ds_" + _v_id
data_source['name'] = "ds_" + _v_id
data_source['secureJsonData'] = {}
data_source['secureJsonData']['password'] = 'password'
data_source['readOnly'] = True
del data_source['secureJsonFields']
# add data_source to vessel org
path = 'api/datasources'
url = f'{grafana_uri}/{path}'
data = json.dumps(data_source)
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.post(url, data=data, headers=headers)
#plpy.notice(r.json())
del headers['X-Grafana-Org-Id']
if r.status_code != 200 and "id" not in r.json():
plpy.error('Error grafana add data_source to vessel org')
return
dashboards_tpl = [ 'pgsail_tpl_electrical', 'pgsail_tpl_logbook', 'pgsail_tpl_monitor', 'pgsail_tpl_rpi', 'pgsail_tpl_solar', 'pgsail_tpl_weather', 'pgsail_tpl_home']
for dashboard in dashboards_tpl:
# read dashboard template by uid
path = 'api/dashboards/uid'
url = f'{grafana_uri}/{path}/{dashboard}'
if 'X-Grafana-Org-Id' in headers:
del headers['X-Grafana-Org-Id']
r = requests.get(url, headers=headers)
#plpy.notice(r.json())
if r.status_code != 200 and "id" not in r.json():
plpy.error('Error grafana read dashboard template')
return
new_dashboard = r.json()
del new_dashboard['meta']
new_dashboard['dashboard']['version'] = 0
new_dashboard['dashboard']['id'] = 0
new_uid = re.sub(r'pgsail_tpl_(.*)', r'postgsail_\1', new_dashboard['dashboard']['uid'])
new_dashboard['dashboard']['uid'] = f'{new_uid}_{_v_id}'
# add dashboard to vessel org
path = 'api/dashboards/db'
url = f'{grafana_uri}/{path}'
data = json.dumps(new_dashboard)
new_data = data.replace('PCC52D03280B7034C', data_source['uid'])
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.post(url, data=new_data, headers=headers)
#plpy.notice(r.json())
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana add dashboard to vessel org')
return
# Update Org Prefs
path = 'api/org/preferences'
url = f'{grafana_uri}/{path}'
home_dashboard = {}
home_dashboard['timezone'] = 'utc'
home_dashboard['homeDashboardUID'] = f'postgsail_home_{_v_id}'
data = json.dumps(home_dashboard)
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.patch(url, data=data, headers=headers)
#plpy.notice(r.json())
if r.status_code != 200:
plpy.error('Error grafana update org preferences')
return
plpy.notice('Done')
$grafana_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.grafana_py_fn
IS 'Grafana Organization,User,data_source,dashboards provisioning via HTTP API using plpython3u';
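-- Example invocation (a sketch only; the values below are hypothetical, in
-- production this is driven by cron_process_grafana_fn with get_app_settings_fn()):
--SELECT public.grafana_py_fn('aava', '1e4f7a8b9c0d', 'demo@example.com', public.get_app_settings_fn());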
-- https://stackoverflow.com/questions/65517230/how-to-set-user-attribute-value-in-keycloak-using-api
DROP FUNCTION IF EXISTS keycloak_py_fn;
CREATE OR REPLACE FUNCTION keycloak_py_fn(IN user_id TEXT, IN vessel_id TEXT,
IN app JSONB) RETURNS JSONB
AS $keycloak_py$
"""
Add vessel_id user attribute to keycloak user {user_id}
"""
import requests
import json
import urllib.parse
safe_uri = host = user = pwd = None
if 'app.keycloak_uri' in app and app['app.keycloak_uri']:
#safe_uri = urllib.parse.quote(app['app.keycloak_uri'], safe=':/?&=')
_ = urllib.parse.urlparse(app['app.keycloak_uri'])
host = _.netloc.split('@')[-1]
user = _.netloc.split(':')[0]
pwd = _.netloc.split(':')[1].split('@')[0]
else:
plpy.error('Error no keycloak_uri defined, check app settings')
return None
if not host or not user or not pwd:
plpy.error('Error parsing keycloak_uri, check app settings')
return None
_headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com'}
_payload = {'client_id':'admin-cli','grant_type':'password','username':user,'password':pwd}
url = f'{_.scheme}://{host}/realms/master/protocol/openid-connect/token'
r = requests.post(url, headers=_headers, data=_payload, timeout=(5, 60))
#print(r.text)
#plpy.notice(url)
if r.status_code == 200 and 'access_token' in r.json():
response = r.json()
plpy.notice(response)
_headers['Authorization'] = 'Bearer '+ response['access_token']
_headers['Content-Type'] = 'application/json'
_payload = { 'attributes': {'vessel_id': vessel_id} }
url = f'{_.scheme}://{host}/admin/realms/postgsail/users/{user_id}'
#plpy.notice(url)
#plpy.notice(_payload)
data = json.dumps(_payload)
r = requests.put(url, headers=_headers, data=data, timeout=(5, 60))
if r.status_code != 204:
plpy.notice("Error updating user: {status} [{text}]".format(
status=r.status_code, text=r.text))
return None
else:
plpy.notice("Updated user : {user} [{text}]".format(user=user_id, text=r.text))
else:
plpy.notice('Error getting admin access_token: {status} [{text}]'.format(
status=r.status_code, text=r.text))
return None
$keycloak_py$ strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.keycloak_py_fn
IS 'Set oauth user attribute into keycloak using plpython3u';
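-- Example (hypothetical ids; app must contain app.keycloak_uri in the form
-- scheme://admin:password@host so the admin credentials can be parsed out):
--SELECT public.keycloak_py_fn('2b8c0d4e-0000-0000-0000-000000000000', '1e4f7a8b9c0d', public.get_app_settings_fn());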
DROP FUNCTION IF EXISTS keycloak_auth_py_fn;
CREATE OR REPLACE FUNCTION keycloak_auth_py_fn(IN _v_id TEXT,
IN _user JSONB, IN app JSONB) RETURNS JSONB
AS $keycloak_auth_py$
"""
Add keycloak user
"""
import requests
import json
import urllib.parse
safe_uri = host = user = pwd = None
if 'app.keycloak_uri' in app and app['app.keycloak_uri']:
#safe_uri = urllib.parse.quote(app['app.keycloak_uri'], safe=':/?&=')
_ = urllib.parse.urlparse(app['app.keycloak_uri'])
host = _.netloc.split('@')[-1]
user = _.netloc.split(':')[0]
pwd = _.netloc.split(':')[1].split('@')[0]
else:
plpy.error('Error no keycloak_uri defined, check app settings')
return None
if not host or not user or not pwd:
plpy.error('Error parsing keycloak_uri, check app settings')
return None
if 'email' not in _user or not _user['email']:
plpy.error('Error parsing user email, check user settings')
return None
if not _v_id:
plpy.error('Error parsing vessel_id')
return None
_headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com'}
_payload = {'client_id':'admin-cli','grant_type':'password','username':user,'password':pwd}
url = f'{_.scheme}://{host}/realms/master/protocol/openid-connect/token'
r = requests.post(url, headers=_headers, data=_payload, timeout=(5, 60))
#print(r.text)
#plpy.notice(url)
if r.status_code == 200 and 'access_token' in r.json():
response = r.json()
#plpy.notice(response)
_headers['Authorization'] = 'Bearer '+ response['access_token']
_headers['Content-Type'] = 'application/json'
url = f'{_.scheme}://{host}/admin/realms/postgsail/users'
_payload = {
"enabled": "true",
"email": _user['email'],
"firstName": _user['recipient'],
"attributes": {"vessel_id": _v_id},
"emailVerified": True,
"requiredActions":["UPDATE_PROFILE", "UPDATE_PASSWORD"]
}
#plpy.notice(_payload)
data = json.dumps(_payload)
r = requests.post(url, headers=_headers, data=data, timeout=(5, 60))
if r.status_code != 201:
#print("Error creating user: {status}".format(status=r.status_code))
plpy.error('Error creating user: {user} {status}'.format(user=_payload['email'], status=r.status_code))
return None
else:
#print("Created user : {u}]".format(u=_payload['email']))
plpy.notice('Created user : {u} {t}, {l}'.format(u=_payload['email'], t=r.text, l=r.headers['location']))
user_url = "{user_url}/execute-actions-email".format(user_url=r.headers['location'])
_payload = ["UPDATE_PASSWORD"]
#plpy.notice(_payload)
data = json.dumps(_payload)
r = requests.put(user_url, headers=_headers, data=data, timeout=(5, 60))
if r.status_code != 204:
plpy.error('Error execute-actions-email: {u} {s}'.format(u=_user['email'], s=r.status_code))
else:
plpy.notice('execute-actions-email: {u} {s}'.format(u=_user['email'], s=r.status_code))
return None
else:
plpy.error('Error getting admin access_token: {status}'.format(status=r.status_code))
return None
$keycloak_auth_py$ strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.keycloak_auth_py_fn
IS 'Create an oauth user into keycloak using plpython3u';
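-- Example (hypothetical values; _user needs at least email and recipient keys,
-- matching the shape returned by get_user_settings_from_vesselid_fn):
--SELECT public.keycloak_auth_py_fn('1e4f7a8b9c0d',
--    '{"email": "demo@example.com", "recipient": "Demo"}'::JSONB,
--    public.get_app_settings_fn());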
CREATE OR REPLACE FUNCTION windy_pws_py_fn(IN metric JSONB,
IN _user JSONB, IN app JSONB) RETURNS JSONB
AS $windy_pws_py$
"""
Send environment data from boat instruments to Windy as a Personal Weather Station (PWS)
https://community.windy.com/topic/8168/report-your-weather-station-data-to-windy
"""
import requests
import json
import decimal
if 'app.windy_apikey' not in app or not app['app.windy_apikey']:
plpy.error('Error no windy_apikey defined, check app settings')
return None
if 'station' not in metric or not metric['station']:
plpy.error('Error no station defined, check metrics')
return None
if 'temp' not in metric or not metric['temp']:
plpy.error('Error no temp defined, check metrics')
return None
if not _user:
plpy.error('Error no user defined, check user settings')
return None
_headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com', 'Content-Type': 'application/json'}
_payload = {
'stations': [
{ 'station': int(decimal.Decimal(metric['station'])),
'name': metric['name'],
'shareOption': 'Open',
'type': 'SignalK PostgSail Plugin',
'provider': 'PostgSail',
'url': 'https://iot.openplotter.cloud/{name}/monitoring'.format(name=metric['name']),
'lat': float(decimal.Decimal(metric['lat'])),
'lon': float(decimal.Decimal(metric['lon'])),
'elevation': 1 }
],
'observations': [
{ 'station': int(decimal.Decimal(metric['station'])),
'temp': float(decimal.Decimal(metric['temp'])),
'wind': round(float(decimal.Decimal(metric['wind']))),
'gust': round(float(decimal.Decimal(metric['gust']))),
'winddir': int(decimal.Decimal(metric['winddir'])),
'pressure': int(decimal.Decimal(metric['pressure'])),
'rh': float(decimal.Decimal(metric['rh'])) }
]}
#print(_payload)
#plpy.notice(_payload)
data = json.dumps(_payload)
api_url = 'https://stations.windy.com/pws/update/{api_key}'.format(api_key=app['app.windy_apikey'])
r = requests.post(api_url, data=data, headers=_headers, timeout=(5, 60))
#print(r.text)
#plpy.notice(api_url)
if r.status_code == 200:
#print('Data sent successfully!')
plpy.notice('Data sent successfully to Windy!')
#plpy.notice(api_url)
if not 'windy' in _user['settings']:
api_url = 'https://stations.windy.com/pws/station/{api_key}/{station}'.format(api_key=app['app.windy_apikey'], station=metric['station'])
#print(r.text)
#plpy.notice(api_url)
r = requests.get(api_url, timeout=(5, 60))
if r.status_code == 200:
#print('Windy Personal Weather Station created successfully in Windy Stations!')
plpy.notice('Windy Personal Weather Station created successfully in Windy Stations!')
return r.json()
else:
plpy.error(f'Failed to gather PWS details. Status code: {r.status_code}')
else:
plpy.error(f'Failed to send data. Status code: {r.status_code}')
#print(f'Failed to send data. Status code: {r.status_code}')
#print(r.text)
return {}
$windy_pws_py$ strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.windy_pws_py_fn
IS 'Forward vessel data to Windy as a Personal Weather Station using plpython3u';
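-- Example (hypothetical observation; keys mirror the jsonb built by cron_windy_fn,
-- with units already converted: temp in Celsius, winddir in degrees, rh in percent):
--SELECT public.windy_pws_py_fn(
--    '{"station": 1, "name": "aava", "lat": 60.1, "lon": 24.9, "temp": 18.2,
--      "wind": 5, "gust": 7, "winddir": 270, "pressure": 101300, "rh": 65}'::JSONB,
--    '{"settings": {}}'::JSONB,
--    public.get_app_settings_fn());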


@@ -21,7 +21,8 @@ CREATE EXTENSION IF NOT EXISTS "pgcrypto"; -- provides cryptographic functions
DROP TABLE IF EXISTS auth.accounts CASCADE;
CREATE TABLE IF NOT EXISTS auth.accounts (
public_id INT UNIQUE NOT NULL GENERATED ALWAYS AS IDENTITY,
id INT UNIQUE GENERATED ALWAYS AS IDENTITY,
--id TEXT NOT NULL UNIQUE DEFAULT uuid_generate_v7(),
user_id TEXT NOT NULL UNIQUE DEFAULT RIGHT(gen_random_uuid()::text, 12),
email CITEXT PRIMARY KEY CHECK ( email ~* '^.+@.+\..+$' ),
first TEXT NOT NULL CHECK (length(first) < 512),
@@ -60,32 +61,56 @@ CREATE TABLE IF NOT EXISTS auth.vessels (
vessel_id TEXT NOT NULL UNIQUE DEFAULT RIGHT(gen_random_uuid()::text, 12),
-- user_id TEXT NOT NULL REFERENCES auth.accounts(user_id) ON DELETE RESTRICT,
owner_email CITEXT PRIMARY KEY REFERENCES auth.accounts(email) ON DELETE RESTRICT,
-- mmsi TEXT UNIQUE, -- Should be a numeric range between 100000000 and 800000000.
mmsi NUMERIC UNIQUE, -- MMSI can be optional but if present must be a valid one and unique
name TEXT NOT NULL CHECK (length(name) >= 3 AND length(name) < 512),
-- pass text not null check (length(pass) < 512), -- unused
role name not null check (length(role) < 512),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
-- CONSTRAINT valid_length_mmsi CHECK (length(mmsi) < 10 OR length(mmsi) = 0)
CONSTRAINT valid_range_mmsi CHECK (mmsi > 100000000 AND mmsi < 800000000)
);
-- Description
COMMENT ON TABLE
auth.vessels
IS 'vessels table link to accounts email user_id column';
-- Indexes
CREATE INDEX vessels_vesselid_idx ON auth.vessels (vessel_id);
COMMENT ON COLUMN
auth.vessels.mmsi
IS 'MMSI can be optional but if present must be a valid one and unique but must be in numeric range between 100000000 and 800000000';
CREATE TRIGGER vessels_moddatetime
BEFORE UPDATE ON auth.vessels
FOR EACH ROW
EXECUTE PROCEDURE moddatetime (updated_at);
-- Description
COMMENT ON TRIGGER vessels_moddatetime
ON auth.vessels
IS 'Automatic update of updated_at on table modification';
CREATE TABLE auth.users (
id NAME PRIMARY KEY DEFAULT current_setting('request.jwt.claims', true)::json->>'sub',
email NAME NOT NULL DEFAULT current_setting('request.jwt.claims', true)::json->>'email',
user_id TEXT NOT NULL UNIQUE DEFAULT RIGHT(gen_random_uuid()::text, 12),
first TEXT NOT NULL DEFAULT current_setting('request.jwt.claims', true)::json->>'given_name',
last TEXT NOT NULL DEFAULT current_setting('request.jwt.claims', true)::json->>'family_name',
role NAME NOT NULL DEFAULT 'user_role' CHECK (length(role) < 512),
preferences JSONB NULL DEFAULT '{"email_notifications":true, "email_valid": true, "email_verified": true}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
connected_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Description
COMMENT ON TABLE
auth.users
IS 'Keycloak Oauth user, map user details from access token';
CREATE TRIGGER user_moddatetime
BEFORE UPDATE ON auth.users
FOR EACH ROW
EXECUTE PROCEDURE moddatetime (updated_at);
-- Description
COMMENT ON TRIGGER user_moddatetime
ON auth.users
IS 'Automatic update of updated_at on table modification';
create or replace function
auth.check_role_exists() returns trigger as $$
begin
@@ -263,6 +288,96 @@ begin
end;
$$ language plpgsql security definer;
---------------------------------------------------------------------------
-- API account Oauth functions
--
-- oauth is on your exposed schema
create or replace function
api.oauth() returns void as $$
declare
_exist boolean;
begin
-- Ensure we have the required key/value in the access token
if current_setting('request.jwt.claims', true)::json->>'sub' is null OR
current_setting('request.jwt.claims', true)::json->>'email' is null THEN
return;
end if;
-- check email exist
select exists( select email from auth.users
where id = current_setting('request.jwt.claims', true)::json->>'sub'
) INTO _exist;
if _exist is False then
RAISE WARNING 'Register new oauth user email:[%]', current_setting('request.jwt.claims', true)::json->>'email';
-- insert new user, default value from the oauth access token
INSERT INTO auth.users (role, preferences)
VALUES ('user_role', '{"email_notifications":true, "email_valid": true, "email_verified": true}');
end if;
end;
$$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
api.oauth
IS 'openid/oauth user register entry point';
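-- A minimal local sketch (hypothetical claims; in production PostgREST sets
-- request.jwt.claims from the access token before the function is called):
--BEGIN;
--SELECT set_config('request.jwt.claims', '{"sub": "kc-user-id", "email": "demo@example.com", "given_name": "Demo", "family_name": "Sailor"}', true);
--SELECT api.oauth();
--COMMIT;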
create or replace function
api.oauth_vessel(in _mmsi text, in _name text) returns void as $$
declare
_exist boolean;
vessel_name text := _name;
vessel_mmsi text := _mmsi;
_vessel_id text := null;
vessel_rec record;
app_settings jsonb;
_user_id text := null;
begin
RAISE WARNING 'oauth_vessel:[%]', current_setting('user.email', true);
RAISE WARNING 'oauth_vessel:[%]', current_setting('request.jwt.claims', true)::json->>'email';
-- Ensure we have the required key/value in the access token
if current_setting('request.jwt.claims', true)::json->>'sub' is null OR
current_setting('request.jwt.claims', true)::json->>'email' is null THEN
return;
end if;
-- check email exist
select exists( select email from auth.accounts
where email = current_setting('request.jwt.claims', true)::json->>'email'
) INTO _exist;
if _exist is False then
RAISE WARNING 'Register new oauth user email:[%]', current_setting('request.jwt.claims', true)::json->>'email';
-- insert new user, default value from the oauth access token
INSERT INTO auth.users VALUES(DEFAULT) RETURNING user_id INTO _user_id;
-- insert new user to account table from the oauth access token
INSERT INTO auth.accounts (email, first, last, pass, user_id, role, preferences)
VALUES (current_setting('request.jwt.claims', true)::json->>'email',
current_setting('request.jwt.claims', true)::json->>'given_name',
current_setting('request.jwt.claims', true)::json->>'family_name',
current_setting('request.jwt.claims', true)::json->>'sub',
_user_id, 'user_role', '{"email_notifications":true, "email_valid": true, "email_verified": true}');
end if;
IF public.isnumeric(vessel_mmsi) IS False THEN
vessel_mmsi = NULL;
END IF;
-- check vessel exist
SELECT * INTO vessel_rec
FROM auth.vessels vessel
WHERE vessel.owner_email = current_setting('request.jwt.claims', true)::json->>'email';
IF vessel_rec IS NULL THEN
RAISE WARNING 'Register new vessel name:[%] mmsi:[%] for [%]', vessel_name, vessel_mmsi, current_setting('request.jwt.claims', true)::json->>'email';
INSERT INTO auth.vessels (owner_email, mmsi, name, role)
VALUES (current_setting('request.jwt.claims', true)::json->>'email', vessel_mmsi::NUMERIC, vessel_name, 'vessel_role') RETURNING vessel_id INTO _vessel_id;
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- set oauth user vessel_id attributes for token generation
PERFORM keycloak_py_fn(current_setting('request.jwt.claims', true)::json->>'sub'::TEXT, _vessel_id::TEXT, app_settings);
END IF;
end;
$$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
api.oauth_vessel
IS 'user and vessel register entry point from signalk plugin';
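-- Example (hypothetical values; requires request.jwt.claims as above, and a
-- non-numeric or empty mmsi is coerced to NULL via public.isnumeric):
--SELECT api.oauth_vessel('316001234', 'aava');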
---------------------------------------------------------------------------
-- API vessel helper functions
-- register_vessel should be on your exposed schema
@@ -289,7 +404,7 @@ begin
IF vessel_rec IS NULL THEN
RAISE WARNING 'Register new vessel name:[%] mmsi:[%] for [%]', vessel_name, vessel_mmsi, vessel_email;
INSERT INTO auth.vessels (owner_email, mmsi, name, role)
VALUES (vessel_email, vessel_mmsi::NUMERIC, vessel_name, 'vessel_role') RETURNING vessel_id INTO _vessel_id;
vessel_rec.role := 'vessel_role';
vessel_rec.owner_email = vessel_email;
vessel_rec.vessel_id = _vessel_id;


@@ -242,16 +242,17 @@ $vessel_details$
DECLARE
BEGIN
RETURN ( WITH tbl AS (
SELECT mmsi,ship_type,length,beam,height,plugin_version FROM api.metadata WHERE vessel_id = current_setting('vessel.id', false)
SELECT mmsi,ship_type,length,beam,height,plugin_version,platform FROM api.metadata WHERE vessel_id = current_setting('vessel.id', false)
)
SELECT json_build_object(
'ship_type', (SELECT ais.description FROM aistypes ais, tbl t WHERE t.ship_type = ais.id),
'country', (SELECT mid.country FROM mid, tbl t WHERE LEFT(cast(t.mmsi as text), 3)::NUMERIC = mid.id),
'alpha_2', (SELECT o.alpha_2 FROM mid m, iso3166 o, tbl t WHERE LEFT(cast(t.mmsi as text), 3)::NUMERIC = m.id AND m.country_id = o.id),
'length', t.ship_type,
'length', t.length,
'beam', t.beam,
'height', t.height,
'plugin_version', t.plugin_version)
'plugin_version', t.plugin_version,
'platform', t.platform)
FROM tbl t
);
END;
@@ -265,8 +266,8 @@ DROP VIEW IF EXISTS api.eventlogs_view;
CREATE VIEW api.eventlogs_view WITH (security_invoker=true,security_barrier=true) AS
SELECT pq.*
FROM public.process_queue pq
WHERE ref_id = current_setting('user.id', true)
OR ref_id = current_setting('vessel.id', true)
WHERE channel <> 'pre_logbook' AND (ref_id = current_setting('user.id', true)
OR ref_id = current_setting('vessel.id', true))
ORDER BY id ASC;
-- Description
COMMENT ON VIEW
@@ -315,7 +316,8 @@ BEGIN
RETURN False;
END IF;
IF _type ~ '^public_(logs|timelapse)$' AND _id IS NOT NULL THEN
RAISE WARNING '-> ispublic_fn _type [%], _id [%]', _type, _id;
IF _type ~ '^public_(logs|timelapse)$' AND _id > 0 THEN
WITH log as (
SELECT vessel_id from api.logbook l where l.id = _id
)


@@ -22,7 +22,8 @@ COMMENT ON TABLE
IS 'Stores temporal otp code for up to 15 minutes';
-- Indexes
CREATE INDEX otp_pass_idx ON auth.otp (otp_pass);
CREATE INDEX otp_user_email_idx ON auth.otp (user_email);
-- Duplicate Indexes
--CREATE INDEX otp_user_email_idx ON auth.otp (user_email);
DROP FUNCTION IF EXISTS public.generate_uid_fn;
CREATE OR REPLACE FUNCTION public.generate_uid_fn(size INT) RETURNS TEXT


@@ -40,6 +40,9 @@ grant execute on function api.telegram_otp_fn(text) to api_anonymous;
--grant execute on function api.generate_otp_fn(text) to api_anonymous;
grant execute on function api.ispublic_fn(text,text,integer) to api_anonymous;
grant execute on function api.timelapse_fn to api_anonymous;
grant execute on function api.stats_logs_fn to api_anonymous;
grant execute on function api.stats_stays_fn to api_anonymous;
grant execute on function api.status_fn to api_anonymous;
-- Allow read on TABLES on API schema
--GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata,api.stays_at TO api_anonymous;
-- Allow read on VIEWS on API schema
@@ -90,6 +93,7 @@ GRANT SELECT ON TABLE auth.accounts TO grafana_auth;
GRANT SELECT ON TABLE auth.vessels TO grafana_auth;
-- GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO grafana_auth;
GRANT EXECUTE ON FUNCTION public.citext_eq(citext, citext) TO grafana_auth;
GRANT ALL ON SCHEMA public TO grafana_auth; -- Important if grafana database in pg
-- User:
-- nologin, web api only
@@ -152,6 +156,9 @@ GRANT EXECUTE ON FUNCTION public.stay_in_progress_fn(text) to vessel_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA _timescaledb_internal TO vessel_role;
-- on metrics st_makepoint
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO vessel_role;
-- Oauth registration
GRANT EXECUTE ON FUNCTION api.oauth() TO vessel_role;
GRANT EXECUTE ON FUNCTION api.oauth_vessel(text,text) TO vessel_role;
--- Scheduler:
-- TODO: currently cron function are run as super user, switch to scheduler role.
@@ -278,6 +285,10 @@ CREATE POLICY api_scheduler_role ON api.stays TO scheduler
CREATE POLICY grafana_role ON api.stays TO grafana
USING (vessel_id = current_setting('vessel.id', false))
WITH CHECK (false);
-- Allow anonymous to select based on the vessel.id
CREATE POLICY api_anonymous_role ON api.stays TO api_anonymous
USING (vessel_id = current_setting('vessel.id', false))
WITH CHECK (false);
-- Be sure to enable row level security on the table
ALTER TABLE api.moorages ENABLE ROW LEVEL SECURITY;
@@ -301,6 +312,10 @@ CREATE POLICY api_scheduler_role ON api.moorages TO scheduler
CREATE POLICY grafana_role ON api.moorages TO grafana
USING (vessel_id = current_setting('vessel.id', false))
WITH CHECK (false);
-- Allow anonymous to select based on the vessel.id
CREATE POLICY api_anonymous_role ON api.moorages TO api_anonymous
USING (vessel_id = current_setting('vessel.id', false))
WITH CHECK (false);
-- Be sure to enable row level security on the table
ALTER TABLE auth.vessels ENABLE ROW LEVEL SECURITY;


@@ -10,19 +10,23 @@ CREATE EXTENSION IF NOT EXISTS pg_cron; -- provides a simple cron-based job sche
-- TRUNCATE table jobs
--TRUNCATE TABLE cron.job CONTINUE IDENTITY RESTRICT;
-- Create a every 5 minutes or minute job cron_process_new_logbook_fn ??
SELECT cron.schedule('cron_new_logbook', '*/5 * * * *', 'select public.cron_process_new_logbook_fn()');
-- Create a every 5 minutes or minute job cron_process_pre_logbook_fn ??
SELECT cron.schedule('cron_pre_logbook', '*/5 * * * *', 'select public.cron_process_pre_logbook_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_pre_logbook';
-- Create a every 6 minutes or minute job cron_process_new_logbook_fn ??
SELECT cron.schedule('cron_new_logbook', '*/6 * * * *', 'select public.cron_process_new_logbook_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_logbook';
-- Create a every 5 minute job cron_process_new_stay_fn
SELECT cron.schedule('cron_new_stay', '*/6 * * * *', 'select public.cron_process_new_stay_fn()');
-- Create a every 7 minute job cron_process_new_stay_fn
SELECT cron.schedule('cron_new_stay', '*/7 * * * *', 'select public.cron_process_new_stay_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_stay';
-- Create a every 6 minute job cron_process_new_moorage_fn, delay from stay to give time to generate geo reverse location, eg: name
--SELECT cron.schedule('cron_new_moorage', '*/7 * * * *', 'select public.cron_process_new_moorage_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_moorage';
-- Create a every 10 minute job cron_process_monitor_offline_fn
-- Create a every 11 minute job cron_process_monitor_offline_fn
SELECT cron.schedule('cron_monitor_offline', '*/11 * * * *', 'select public.cron_process_monitor_offline_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_monitor_offline';
@@ -42,41 +46,51 @@ SELECT cron.schedule('cron_monitor_online', '*/10 * * * *', 'select public.cron_
--SELECT cron.schedule('cron_new_account_otp', '*/6 * * * *', 'select public.cron_process_new_account_otp_validation_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_account_otp';
-- Create a every 5 minute job cron_process_grafana_fn
SELECT cron.schedule('cron_grafana', '*/5 * * * *', 'select public.cron_process_grafana_fn()');
-- Create a every 5 minute job cron_process_windy_fn
SELECT cron.schedule('cron_windy', '*/5 * * * *', 'select public.cron_windy_fn()');
-- Notification
-- Create a every 1 minute job cron_process_new_notification_queue_fn, new_account, new_vessel, _new_account_otp
SELECT cron.schedule('cron_new_notification', '*/2 * * * *', 'select public.cron_process_new_notification_fn()');
SELECT cron.schedule('cron_new_notification', '*/1 * * * *', 'select public.cron_process_new_notification_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_notification';
-- Maintenance
-- Vacuum database at At 01:01 on Sunday.
SELECT cron.schedule('cron_vacuum', '1 1 * * 0', 'VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.logbook,api.stays,api.moorages,api.metadata,api.metrics;');
-- Remove all jobs log at At 02:02 on Sunday.
-- Vacuum database schema api at "At 01:31 on Sunday."
SELECT cron.schedule('cron_vacuum_api', '31 1 * * 0', 'VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.logbook,api.stays,api.moorages,api.metadata,api.metrics;');
-- Vacuum database schema auth at "At 01:01 on Sunday."
SELECT cron.schedule('cron_vacuum_auth', '1 1 * * 0', 'VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) auth.accounts,auth.vessels,auth.otp;');
-- Remove old jobs log at "At 02:02 on Sunday."
SELECT cron.schedule('job_run_details_cleanup', '2 2 * * 0', 'select public.job_run_details_cleanup_fn()');
-- Rebuilding indexes at first day of each month at 23:01.
SELECT cron.schedule('cron_reindex', '1 23 1 * *', 'REINDEX TABLE api.logbook; REINDEX TABLE api.stays; REINDEX TABLE api.moorages; REINDEX TABLE api.metadata; REINDEX TABLE api.metrics;');
-- Rebuilding indexes schema api at "first day of each month at 23:15."
SELECT cron.schedule('cron_reindex_api', '15 23 1 * *', 'REINDEX TABLE CONCURRENTLY api.logbook; REINDEX TABLE CONCURRENTLY api.stays; REINDEX TABLE CONCURRENTLY api.moorages; REINDEX TABLE CONCURRENTLY api.metadata;');
-- Rebuilding indexes schema auth at "first day of each month at 23:01."
SELECT cron.schedule('cron_reindex_auth', '1 23 1 * *', 'REINDEX TABLE CONCURRENTLY auth.accounts; REINDEX TABLE CONCURRENTLY auth.vessels; REINDEX TABLE CONCURRENTLY auth.otp;');
-- Any other maintenance require?
-- OTP
-- Create a every 15 minute job cron_process_prune_otp_fn
SELECT cron.schedule('cron_prune_otp', '*/15 * * * *', 'select public.cron_process_prune_otp_fn()');
-- Create a every 15 minute job cron_prune_otp_fn
SELECT cron.schedule('cron_prune_otp', '*/15 * * * *', 'select public.cron_prune_otp_fn()');
-- Alerts
-- Create a every 11 minute job cron_process_alerts_fn
--SELECT cron.schedule('cron_alerts', '*/11 * * * *', 'select public.cron_process_alerts_fn()');
-- Create a every 11 minute job cron_alerts_fn
SELECT cron.schedule('cron_alerts', '*/11 * * * *', 'select public.cron_alerts_fn()');
-- Notifications/Reminders of no vessel & no metadata & no activity
-- At 08:05 on Sunday.
-- At 08:05 on every 4th day-of-month if it's on Sunday.
SELECT cron.schedule('cron_no_vessel', '5 8 */4 * 0', 'select public.cron_process_no_vessel_fn()');
SELECT cron.schedule('cron_no_metadata', '5 8 */4 * 0', 'select public.cron_process_no_metadata_fn()');
SELECT cron.schedule('cron_no_activity', '5 8 */4 * 0', 'select public.cron_process_no_activity_fn()');
SELECT cron.schedule('cron_no_vessel', '5 8 */4 * 0', 'select public.cron_no_vessel_fn()');
SELECT cron.schedule('cron_no_metadata', '5 8 */4 * 0', 'select public.cron_no_metadata_fn()');
SELECT cron.schedule('cron_no_activity', '5 8 */4 * 0', 'select public.cron_no_activity_fn()');
-- Cron job settings
UPDATE cron.job SET database = 'signalk';
UPDATE cron.job SET username = 'username'; -- TODO update to scheduler, pending process_queue update
--UPDATE cron.job SET username = 'username' where jobname = 'cron_vacuum'; -- TODO Update to superuser for vaccuum permissions
UPDATE cron.job SET username = current_user; -- TODO update to scheduler, pending process_queue update
--UPDATE cron.job SET username = 'username' where jobname = 'cron_vacuum'; -- TODO Update to superuser for vacuum permissions
UPDATE cron.job SET nodename = '/var/run/postgresql/'; -- VS default localhost ??
UPDATE cron.job SET database = 'postgresql' WHERE jobname = 'job_run_details_cleanup_fn';
UPDATE cron.job SET database = 'postgres' WHERE jobname = 'job_run_details_cleanup';
-- check job lists
SELECT * FROM cron.job;
-- unschedule by job id


@@ -0,0 +1,457 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration January 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Force timezone, just in case'
set timezone to 'UTC';
COMMENT ON FUNCTION
public.cron_process_new_moorage_fn
IS 'Deprecated, init by pg_cron to check for new moorage pending update, if so perform process_moorage_queue_fn';
DROP FUNCTION IF EXISTS reverse_geoip_py_fn;
CREATE OR REPLACE FUNCTION reverse_geoip_py_fn(IN _ip TEXT) RETURNS JSONB
AS $reverse_geoip_py$
"""
Return ipapi.co ip details
"""
import requests
import json
# requests
url = f'https://ipapi.co/{_ip}/json/'
r = requests.get(url)
#print(r.text)
#plpy.notice('IP [{}] [{}]'.format(_ip, r.status_code))
if r.status_code == 200:
#plpy.notice('Got [{}] [{}]'.format(r.text, r.status_code))
return r.json()
else:
plpy.error('Failed to get ip details')
return {}
$reverse_geoip_py$ IMMUTABLE strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.reverse_geoip_py_fn
IS 'Retrieve reverse geo IP location via ipapi.co using plpython3u';
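-- Example (any public IP; returns the raw ipapi.co payload, e.g. city, country, latitude):
--SELECT public.reverse_geoip_py_fn('8.8.8.8');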
DROP FUNCTION IF EXISTS overpass_py_fn;
CREATE OR REPLACE FUNCTION overpass_py_fn(IN lon NUMERIC, IN lat NUMERIC,
OUT geo JSONB) RETURNS JSONB
AS $overpass_py$
"""
Return https://overpass-turbo.eu seamark details within 400m
https://overpass-turbo.eu/s/1EaG
https://wiki.openstreetmap.org/wiki/Key:seamark:type
"""
import requests
import json
import urllib.parse
headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com'}
payload = """
[out:json][timeout:20];
is_in({0},{1})->.result_areas;
(
area.result_areas["seamark:type"~"(mooring|harbour)"][~"^seamark:.*:category$"~"."];
area.result_areas["leisure"="marina"][~"name"~"."];
);
out tags;
nwr(around:400.0,{0},{1})->.all;
(
nwr.all["seamark:type"~"(mooring|harbour)"][~"^seamark:.*:category$"~"."];
nwr.all["seamark:type"~"(anchorage|anchor_berth|berth)"];
nwr.all["leisure"="marina"];
nwr.all["natural"~"(bay|beach)"];
);
out tags;
""".format(lat, lon)
data = urllib.parse.quote(payload, safe="")
url = f'https://overpass-api.de/api/interpreter?data={data}'
r = requests.get(url, headers=headers)
#print(r.text)
#plpy.notice(url)
plpy.notice('overpass-api coord lon[{}] lat[{}] [{}]'.format(lon, lat, r.status_code))
if r.status_code == 200 and "elements" in r.json():
r_dict = r.json()
plpy.notice('overpass-api Got [{}]'.format(r_dict["elements"]))
if r_dict["elements"]:
if "tags" in r_dict["elements"][0] and r_dict["elements"][0]["tags"]:
return r_dict["elements"][0]["tags"]; # return the first element
return {}
else:
plpy.notice('overpass-api Failed to get overpass-api details')
return {}
$overpass_py$ IMMUTABLE strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.overpass_py_fn
IS 'Return https://overpass-turbo.eu seamark details within 400m using plpython3u';
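-- Example (hypothetical coordinates, note the lon/lat argument order; returns the
-- OSM tags of the first matching seamark/marina element within 400m, or {}):
--SELECT public.overpass_py_fn(24.949, 60.155);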
CREATE OR REPLACE FUNCTION get_app_settings_fn(OUT app_settings jsonb)
RETURNS jsonb
AS $get_app_settings$
DECLARE
BEGIN
SELECT
jsonb_object_agg(name, value) INTO app_settings
FROM
public.app_settings
WHERE
name LIKE 'app.email%'
OR name LIKE 'app.pushover%'
OR name LIKE 'app.url'
OR name LIKE 'app.telegram%'
OR name LIKE 'app.grafana_admin_uri'
OR name LIKE 'app.keycloak_uri';
END;
$get_app_settings$
LANGUAGE plpgsql;
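-- Example: returns a flat jsonb of the whitelisted settings, values depend on the deployment:
--SELECT public.get_app_settings_fn();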
CREATE OR REPLACE FUNCTION keycloak_auth_py_fn(IN _v_id TEXT,
IN _user JSONB, IN app JSONB) RETURNS JSONB
AS $keycloak_auth_py$
"""
Add keycloak user
"""
import requests
import json
import urllib.parse
safe_uri = host = user = pwd = None
if 'app.keycloak_uri' in app and app['app.keycloak_uri']:
#safe_uri = urllib.parse.quote(app['app.keycloak_uri'], safe=':/?&=')
_ = urllib.parse.urlparse(app['app.keycloak_uri'])
host = _.netloc.split('@')[-1]
user = _.netloc.split(':')[0]
pwd = _.netloc.split(':')[1].split('@')[0]
else:
plpy.error('Error no keycloak_uri defined, check app settings')
return None
if not host or not user or not pwd:
plpy.error('Error parsing keycloak_uri, check app settings')
return None
if 'email' not in _user or not _user['email']:
plpy.error('Error parsing user email, check user settings')
return None
if not _v_id:
plpy.error('Error parsing vessel_id')
return None
_headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com'}
_payload = {'client_id':'admin-cli','grant_type':'password','username':user,'password':pwd}
url = f'{_.scheme}://{host}/realms/master/protocol/openid-connect/token'
r = requests.post(url, headers=_headers, data=_payload, timeout=(5, 60))
#print(r.text)
#plpy.notice(url)
if r.status_code == 200 and 'access_token' in r.json():
response = r.json()
plpy.notice(response)
_headers['Authorization'] = 'Bearer '+ response['access_token']
_headers['Content-Type'] = 'application/json'
url = f'{_.scheme}://{host}/admin/realms/postgsail/users'
_payload = {
"enabled": "true",
"email": _user['email'],
"firstName": _user['recipient'],
"attributes": {"vessel_id": _v_id},
"emailVerified": True,
"requiredActions":["UPDATE_PROFILE", "UPDATE_PASSWORD"]
}
plpy.notice(_payload)
data = json.dumps(_payload)
r = requests.post(url, headers=_headers, data=data, timeout=(5, 60))
if r.status_code != 201:
#print("Error creating user: {status}".format(status=r.status_code))
plpy.error('Error creating user: {user} {status}'.format(user=_payload['email'], status=r.status_code))
return None
else:
#print("Created user : {u}]".format(u=_payload['email']))
plpy.notice('Created user : {u} {t}, {l}'.format(u=_payload['email'], t=r.text, l=r.headers['location']))
user_url = "{user_url}/execute-actions-email".format(user_url=r.headers['location'])
_payload = ["UPDATE_PASSWORD"]
plpy.notice(_payload)
data = json.dumps(_payload)
r = requests.put(user_url, headers=_headers, data=data, timeout=(5, 60))
if r.status_code != 204:
plpy.error('Error execute-actions-email: {u} {s}'.format(u=_user['email'], s=r.status_code))
else:
plpy.notice('execute-actions-email: {u} {s}'.format(u=_user['email'], s=r.status_code))
return None
else:
plpy.error('Error getting admin access_token: {status}'.format(status=r.status_code))
return None
$keycloak_auth_py$ strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.keycloak_auth_py_fn
IS 'Create an oauth user into keycloak using plpython3u';
CREATE OR REPLACE FUNCTION keycloak_py_fn(IN user_id TEXT, IN vessel_id TEXT,
IN app JSONB) RETURNS JSONB
AS $keycloak_py$
"""
Add vessel_id user attribute to keycloak user {user_id}
"""
import requests
import json
import urllib.parse
safe_uri = host = user = pwd = None
if 'app.keycloak_uri' in app and app['app.keycloak_uri']:
#safe_uri = urllib.parse.quote(app['app.keycloak_uri'], safe=':/?&=')
_ = urllib.parse.urlparse(app['app.keycloak_uri'])
host = _.netloc.split('@')[-1]
user = _.netloc.split(':')[0]
pwd = _.netloc.split(':')[1].split('@')[0]
else:
plpy.error('Error no keycloak_uri defined, check app settings')
return None
if not host or not user or not pwd:
plpy.error('Error parsing keycloak_uri, check app settings')
return None
_headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com'}
_payload = {'client_id':'admin-cli','grant_type':'password','username':user,'password':pwd}
url = f'{_.scheme}://{host}/realms/master/protocol/openid-connect/token'
r = requests.post(url, headers=_headers, data=_payload, timeout=(5, 60))
#print(r.text)
#plpy.notice(url)
if r.status_code == 200 and 'access_token' in r.json():
response = r.json()
plpy.notice(response)
_headers['Authorization'] = 'Bearer '+ response['access_token']
_headers['Content-Type'] = 'application/json'
_payload = { 'attributes': {'vessel_id': vessel_id} }
url = f'{_.scheme}://{host}/admin/realms/postgsail/users/{user_id}'
#plpy.notice(url)
#plpy.notice(_payload)
data = json.dumps(_payload)
r = requests.put(url, headers=_headers, data=data, timeout=(5, 60))
if r.status_code != 204:
plpy.notice("Error updating user: {status} [{text}]".format(
status=r.status_code, text=r.text))
return None
else:
plpy.notice("Updated user : {user} [{text}]".format(user=user_id, text=r.text))
else:
plpy.notice('Error getting admin access_token: {status} [{text}]'.format(
status=r.status_code, text=r.text))
return None
$keycloak_py$ strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
UPDATE public.email_templates
SET pushover_message='Congratulations!
You unlocked Grafana dashboard.
See more details at https://app.openplotter.cloud
',email_content='Hello __RECIPIENT__,
Congratulations! You unlocked Grafana dashboard.
See more details at https://app.openplotter.cloud
Happy sailing!
Francois'
WHERE "name"='grafana';
CREATE OR REPLACE FUNCTION public.cron_process_grafana_fn()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE
process_rec record;
data_rec record;
app_settings jsonb;
user_settings jsonb;
BEGIN
-- We run grafana provisioning only after the first received vessel metadata
-- Check for new vessel metadata pending grafana provisioning
RAISE NOTICE 'cron_process_grafana_fn';
FOR process_rec in
SELECT * from process_queue
where channel = 'grafana' and processed is null
order by stored asc
LOOP
RAISE NOTICE '-> cron_process_grafana_fn [%]', process_rec.payload;
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Get vessel details base on metadata id
SELECT * INTO data_rec
FROM api.metadata m, auth.vessels v
WHERE m.id = process_rec.payload::INTEGER
AND m.vessel_id = v.vessel_id;
-- as we got data from the vessel we can do the grafana provisioning.
PERFORM grafana_py_fn(data_rec.name, data_rec.vessel_id, data_rec.owner_email, app_settings);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(data_rec.vessel_id::TEXT);
RAISE DEBUG '-> DEBUG cron_process_grafana_fn get_user_settings_from_vesselid_fn [%]', user_settings;
-- add user in keycloak
PERFORM keycloak_auth_py_fn(data_rec.vessel_id, user_settings, app_settings);
-- Send notification
PERFORM send_notification_fn('grafana'::TEXT, user_settings::JSONB);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> cron_process_grafana_fn updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$function$
;
COMMENT ON FUNCTION public.cron_process_grafana_fn() IS 'init by pg_cron to check for new vessel pending grafana provisioning, if so perform grafana_py_fn';
-- DROP FUNCTION public.grafana_py_fn(text, text, text, jsonb);
CREATE OR REPLACE FUNCTION public.grafana_py_fn(_v_name text, _v_id text, _u_email text, app jsonb)
RETURNS void
TRANSFORM FOR TYPE jsonb
LANGUAGE plpython3u
AS $function$
"""
https://grafana.com/docs/grafana/latest/developers/http_api/
Create organization based on vessel name
Create user based on user email
Add user to organization
Add data_source to organization
Add dashboard to organization
Update organization preferences
"""
import requests
import json
import re
grafana_uri = None
if 'app.grafana_admin_uri' in app and app['app.grafana_admin_uri']:
grafana_uri = app['app.grafana_admin_uri']
else:
plpy.error('Error no grafana_admin_uri defined, check app settings')
return None
b_name = None
if not _v_name:
b_name = _v_id
else:
b_name = _v_name
# add vessel org
headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com',
'Accept': 'application/json', 'Content-Type': 'application/json'}
path = 'api/orgs'
url = f'{grafana_uri}/{path}'
data_dict = {'name':b_name}
data = json.dumps(data_dict)
r = requests.post(url, data=data, headers=headers)
#print(r.text)
plpy.notice(r.json())
if r.status_code == 200 and "orgId" in r.json():
org_id = r.json()['orgId']
else:
plpy.error('Error grafana add vessel org: {}'.format(r.json()))
return None
# add user to vessel org
path = 'api/admin/users'
url = f'{grafana_uri}/{path}'
data_dict = {'orgId':org_id, 'email':_u_email, 'password':'asupersecretpassword'}
data = json.dumps(data_dict)
r = requests.post(url, data=data, headers=headers)
#print(r.text)
plpy.notice(r.json())
if r.status_code == 200 and "id" in r.json():
user_id = r.json()['id']
else:
plpy.error('Error grafana add user to vessel org')
return
# read data_source
path = 'api/datasources/1'
url = f'{grafana_uri}/{path}'
r = requests.get(url, headers=headers)
#print(r.text)
plpy.notice(r.json())
data_source = r.json()
data_source['id'] = 0
data_source['orgId'] = org_id
data_source['uid'] = "ds_" + _v_id
data_source['name'] = "ds_" + _v_id
data_source['secureJsonData'] = {}
data_source['secureJsonData']['password'] = 'mysecretpassword'
data_source['readOnly'] = True
del data_source['secureJsonFields']
# add data_source to vessel org
path = 'api/datasources'
url = f'{grafana_uri}/{path}'
data = json.dumps(data_source)
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.post(url, data=data, headers=headers)
plpy.notice(r.json())
del headers['X-Grafana-Org-Id']
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana add data_source to vessel org')
return
dashboards_tpl = [ 'pgsail_tpl_electrical', 'pgsail_tpl_logbook', 'pgsail_tpl_monitor', 'pgsail_tpl_rpi', 'pgsail_tpl_solar', 'pgsail_tpl_weather', 'pgsail_tpl_home']
for dashboard in dashboards_tpl:
# read dashboard template by uid
path = 'api/dashboards/uid'
url = f'{grafana_uri}/{path}/{dashboard}'
if 'X-Grafana-Org-Id' in headers:
del headers['X-Grafana-Org-Id']
r = requests.get(url, headers=headers)
plpy.notice(r.json())
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana read dashboard template')
return
new_dashboard = r.json()
del new_dashboard['meta']
new_dashboard['dashboard']['version'] = 0
new_dashboard['dashboard']['id'] = 0
new_uid = re.sub(r'pgsail_tpl_(.*)', r'postgsail_\1', new_dashboard['dashboard']['uid'])
new_dashboard['dashboard']['uid'] = f'{new_uid}_{_v_id}'
# add dashboard to vessel org
path = 'api/dashboards/db'
url = f'{grafana_uri}/{path}'
data = json.dumps(new_dashboard)
new_data = data.replace('PCC52D03280B7034C', data_source['uid'])
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.post(url, data=new_data, headers=headers)
plpy.notice(r.json())
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana add dashboard to vessel org')
return
# Update Org Prefs
path = 'api/org/preferences'
url = f'{grafana_uri}/{path}'
home_dashboard = {}
home_dashboard['timezone'] = 'utc'
home_dashboard['homeDashboardUID'] = f'postgsail_home_{_v_id}'
data = json.dumps(home_dashboard)
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.patch(url, data=data, headers=headers)
plpy.notice(r.json())
if r.status_code != 200:
plpy.error('Error grafana update org preferences')
return
plpy.notice('Done')
$function$
;
COMMENT ON FUNCTION public.grafana_py_fn(text, text, text, jsonb) IS 'Grafana Organization,User,data_source,dashboards provisioning via HTTP API using plpython3u';
UPDATE public.app_settings
SET value='0.6.1'
WHERE "name"='app.version';


@@ -0,0 +1,919 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration February 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Update email_templates
--INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
-- VALUES ('windy','PostgSail Windy Weather station',E'Hello __RECIPIENT__,\nCongratulations! Your boat is now a Windy Weather station.\nSee more details at __APP_URL__/windy\nHappy sailing!\nFrancois','PostgSail Windy!',E'Congratulations!\nYour boat is now a Windy Weather station.\nSee more details at __APP_URL__/windy\n');
--INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
--VALUES ('alert','PostgSail Alert',E'Hello __RECIPIENT__,\nWe detected an alert __ALERT__.\nSee more details at __APP_URL__\nStay safe.\nFrancois','PostgSail Alert!',E'Congratulations!\nWe detected an alert __ALERT__.\n');
INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
VALUES ('windy_error','PostgSail Windy Weather station Error',E'Hello __RECIPIENT__,\nSorry! We could not convert your boat into a Windy Personal Weather Station due to missing data (temp or wind).\nWindy Personal Weather Station is now disabled.','PostgSail Windy error!',E'Sorry!\nWe could not convert your boat into a Windy Personal Weather Station.');
-- Update app_settings
CREATE OR REPLACE FUNCTION public.get_app_settings_fn(OUT app_settings jsonb)
RETURNS jsonb
AS $get_app_settings$
DECLARE
BEGIN
SELECT
jsonb_object_agg(name, value) INTO app_settings
FROM
public.app_settings
WHERE
name LIKE 'app.email%'
OR name LIKE 'app.pushover%'
OR name LIKE 'app.url'
OR name LIKE 'app.telegram%'
OR name LIKE 'app.grafana_admin_uri'
OR name LIKE 'app.keycloak_uri'
OR name LIKE 'app.windy_apikey';
END;
$get_app_settings$
LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION public.get_user_settings_from_vesselid_fn(
IN vesselid TEXT,
OUT user_settings JSONB
) RETURNS JSONB
AS $get_user_settings_from_vesselid$
DECLARE
BEGIN
-- Sanity check, vessel_id must not be NULL or empty
IF vesselid IS NULL OR vesselid = '' THEN
RAISE WARNING '-> get_user_settings_from_vesselid_fn invalid input %', vesselid;
END IF;
SELECT
json_build_object(
'boat' , v.name,
'recipient', a.first,
'email', v.owner_email,
'settings', a.preferences
) INTO user_settings
FROM auth.accounts a, auth.vessels v, api.metadata m
WHERE m.vessel_id = v.vessel_id
AND m.vessel_id = vesselid
AND a.email = v.owner_email;
PERFORM set_config('user.email', user_settings->>'email'::TEXT, false);
PERFORM set_config('user.recipient', user_settings->>'recipient'::TEXT, false);
END;
$get_user_settings_from_vesselid$ LANGUAGE plpgsql;
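-- Example (hypothetical vessel_id; also sets the user.email and user.recipient
-- GUCs as a side effect, as used by the notification templates):
--SELECT public.get_user_settings_from_vesselid_fn('1e4f7a8b9c0d');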
-- Create Windy PWS integration
CREATE OR REPLACE FUNCTION public.windy_pws_py_fn(IN metric JSONB,
IN _user JSONB, IN app JSONB) RETURNS JSONB
AS $windy_pws_py$
"""
Send environment data from boat instruments to Windy as a Personal Weather Station (PWS)
https://community.windy.com/topic/8168/report-your-weather-station-data-to-windy
"""
import requests
import json
import decimal
if 'app.windy_apikey' not in app or not app['app.windy_apikey']:
plpy.error('Error no windy_apikey defined, check app settings')
return None
if 'station' not in metric or not metric['station']:
plpy.error('Error no station defined, check metrics')
return None
if 'temp' not in metric or not metric['temp']:
plpy.error('Error no temp defined, check metrics')
return None
if not _user:
plpy.error('Error no user defined, check user settings')
return None
_headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com', 'Content-Type': 'application/json'}
_payload = {
'stations': [
{ 'station': int(decimal.Decimal(metric['station'])),
'name': metric['name'],
'shareOption': 'Open',
'type': 'SignalK PostgSail Plugin',
'provider': 'PostgSail',
'url': 'https://iot.openplotter.cloud/{name}/monitoring'.format(name=metric['name']),
'lat': float(decimal.Decimal(metric['lat'])),
'lon': float(decimal.Decimal(metric['lon'])),
'elevation': 1 }
],
'observations': [
{ 'station': int(decimal.Decimal(metric['station'])),
'temp': float(decimal.Decimal(metric['temp'])),
'wind': round(float(decimal.Decimal(metric['wind']))),
'gust': round(float(decimal.Decimal(metric['gust']))),
'winddir': int(decimal.Decimal(metric['winddir'])),
'pressure': int(decimal.Decimal(metric['pressure'])),
'rh': float(decimal.Decimal(metric['rh'])) }
]}
#print(_payload)
#plpy.notice(_payload)
data = json.dumps(_payload)
api_url = 'https://stations.windy.com/pws/update/{api_key}'.format(api_key=app['app.windy_apikey'])
r = requests.post(api_url, data=data, headers=_headers, timeout=(5, 60))
#print(r.text)
#plpy.notice(api_url)
if r.status_code == 200:
#print('Data sent successfully!')
plpy.notice('Data sent successfully to Windy!')
#plpy.notice(api_url)
if not 'windy' in _user['settings']:
api_url = 'https://stations.windy.com/pws/station/{api_key}/{station}'.format(api_key=app['app.windy_apikey'], station=metric['station'])
#print(r.text)
#plpy.notice(api_url)
r = requests.get(api_url, timeout=(5, 60))
if r.status_code == 200:
#print('Windy Personal Weather Station created successfully in Windy Stations!')
plpy.notice('Windy Personal Weather Station created successfully in Windy Stations!')
return r.json()
else:
plpy.error(f'Failed to gather PWS details. Status code: {r.status_code}')
else:
plpy.error(f'Failed to send data. Status code: {r.status_code}')
#print(f'Failed to send data. Status code: {r.status_code}')
#print(r.text)
return {}
$windy_pws_py$ strict TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.windy_pws_py_fn
IS 'Forward vessel data to Windy as a Personal Weather Station using plpython3u';
CREATE OR REPLACE FUNCTION public.cron_windy_fn() RETURNS void AS $cron_windy$
DECLARE
windy_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ;
metric_rec record;
windy_metric jsonb;
app_settings jsonb;
user_settings jsonb;
windy_pws jsonb;
BEGIN
-- Check for new observations pending update
RAISE NOTICE 'cron_windy_fn';
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Find users with Windy active and with an active vessel
-- Map account id to Windy Station ID
FOR windy_rec in
SELECT
a.id,a.email,v.vessel_id,v.name,
COALESCE((a.preferences->'windy_last_metric')::TEXT, default_last_metric::TEXT) as last_metric
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'public_windy')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_windy_fn for [%]', windy_rec;
PERFORM set_config('vessel.id', windy_rec.vessel_id, false);
--RAISE WARNING 'public.cron_process_windy_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(windy_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_windy_fn checking user_settings [%]', user_settings;
-- Get all metrics from the last windy_last_metric avg by 5 minutes
-- TODO json_agg to send all data at once, but issue with py jsonb transformation of decimal.
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.outside.temperature')::numeric) AS temperature,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.outside.relativeHumidity')::numeric) AS rh,
avg((m.metrics->'environment.wind.directionTrue')::numeric) AS winddir,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
max((m.metrics->'environment.wind.speedTrue')::numeric) AS gust,
last(latitude, time) AS lat,
last(longitude, time) AS lng
FROM api.metrics m
WHERE vessel_id = windy_rec.vessel_id
AND m.time >= windy_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_windy_fn checking metrics [%]', metric_rec;
IF metric_rec.wind IS NULL OR metric_rec.temperature IS NULL THEN
-- Missing required metrics (wind or temp): notify the user and disable Windy
-- Send notification
PERFORM send_notification_fn('windy_error'::TEXT, user_settings::JSONB);
-- Disable windy
PERFORM api.update_user_preferences_fn('{public_windy}'::TEXT, 'false'::TEXT);
RETURN;
END IF;
-- https://community.windy.com/topic/8168/report-your-weather-station-data-to-windy
-- temp from Kelvin to Celsius
-- winddir from radiant to Degrees
-- rh from ratio to percentage
SELECT jsonb_build_object(
'dateutc', metric_rec.time_bucket,
'station', windy_rec.id,
'name', windy_rec.name,
'lat', metric_rec.lat,
'lon', metric_rec.lng,
'wind', metric_rec.wind,
'gust', metric_rec.gust,
'pressure', metric_rec.pressure,
'winddir', radiantToDegrees(metric_rec.winddir::numeric),
'temp', kelvinToCel(metric_rec.temperature::numeric),
'rh', valToPercent(metric_rec.rh::numeric)
) INTO windy_metric;
RAISE NOTICE '-> cron_windy_fn checking windy_metrics [%]', windy_metric;
SELECT windy_pws_py_fn(windy_metric, user_settings, app_settings) into windy_pws;
RAISE NOTICE '-> cron_windy_fn Windy PWS [%]', ((windy_pws->'header')::JSONB ? 'id');
IF NOT((user_settings->'settings')::JSONB ? 'windy') and ((windy_pws->'header')::JSONB ? 'id') then
RAISE NOTICE '-> cron_windy_fn new Windy PWS [%]', (windy_pws->'header')::JSONB->>'id';
-- Send metrics to Windy
PERFORM api.update_user_preferences_fn('{windy}'::TEXT, ((windy_pws->'header')::JSONB->>'id')::TEXT);
-- Send notification
PERFORM send_notification_fn('windy'::TEXT, user_settings::JSONB);
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{windy_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$cron_windy$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_windy_fn
IS 'init by pg_cron to create (or update) station and uploading observations to Windy Personal Weather Station observations';
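-- Manual run for testing (normally invoked by the cron_windy pg_cron job every 5 minutes):
--SELECT public.cron_windy_fn();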
CREATE OR REPLACE FUNCTION public.cron_alerts_fn() RETURNS void AS $$
DECLARE
alert_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ;
metric_rec record;
app_settings JSONB;
user_settings JSONB;
alerting JSONB;
_alarms JSONB;
alarms TEXT;
alert_default JSONB := '{
"low_pressure_threshold": 990,
"high_wind_speed_threshold": 30,
"low_water_depth_threshold": 1,
"min_notification_interval": 6,
"high_pressure_drop_threshold": 12,
"low_battery_charge_threshold": 90,
"low_battery_voltage_threshold": 12.5,
"low_water_temperature_threshold": 10,
"low_indoor_temperature_threshold": 7,
"low_outdoor_temperature_threshold": 3
}';
BEGIN
-- Check for new event notification pending update
RAISE NOTICE 'cron_alerts_fn';
FOR alert_rec in
SELECT
a.user_id,a.email,v.vessel_id,
COALESCE((a.preferences->'alert_last_metric')::TEXT, default_last_metric::TEXT) as last_metric,
(alert_default || (a.preferences->'alerting')::JSONB) as alerting,
(a.preferences->'alarms')::JSONB as alarms
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'alerting'->'enabled')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_alerts_fn for [%]', alert_rec;
PERFORM set_config('vessel.id', alert_rec.vessel_id, false);
PERFORM set_config('user.email', alert_rec.email, false);
--RAISE WARNING 'public.cron_process_alert_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(alert_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_alerts_fn checking user_settings [%]', user_settings;
-- Get all metrics from the last last_metric avg by 5 minutes
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.inside.temperature')::numeric) AS intemp,
avg((m.metrics->'environment.outside.temperature')::numeric) AS outtemp,
avg((m.metrics->'environment.water.temperature')::numeric) AS wattemp,
avg((m.metrics->'environment.depth.belowTransducer')::numeric) AS watdepth,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
avg((m.metrics->'electrical.batteries.House.voltage')::numeric) AS voltage,
avg((m.metrics->'electrical.batteries.House.capacity.stateOfCharge')::numeric) AS charge
FROM api.metrics m
WHERE vessel_id = alert_rec.vessel_id
AND m.time >= alert_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_alerts_fn checking metrics [%]', metric_rec;
RAISE NOTICE '-> cron_alerts_fn checking alerting [%]', alert_rec.alerting;
--RAISE NOTICE '-> cron_alerts_fn checking debug [%] [%]', kelvinToCel(metric_rec.intemp), (alert_rec.alerting->'low_indoor_temperature_threshold');
IF kelvinToCel(metric_rec.intemp) < (alert_rec.alerting->'low_indoor_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_indoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_indoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.intemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.intemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold';
END IF;
IF kelvinToCel(metric_rec.outtemp) < (alert_rec.alerting->'low_outdoor_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_outdoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_outdoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.outtemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.outtemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold';
END IF;
IF kelvinToCel(metric_rec.wattemp) < (alert_rec.alerting->'low_water_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.wattemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_temperature_threshold value:'|| kelvinToCel(metric_rec.wattemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold';
END IF;
IF metric_rec.watdepth < (alert_rec.alerting->'low_water_depth_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_depth_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_depth_threshold": {"value": '|| metric_rec.watdepth ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_depth_threshold value:'|| metric_rec.watdepth ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold';
END IF;
if metric_rec.pressure < (alert_rec.alerting->'high_pressure_drop_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_pressure_drop_threshold'->>'date') IS NULL) OR
(((_alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_pressure_drop_threshold": {"value": '|| metric_rec.pressure ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_pressure_drop_threshold value:'|| metric_rec.pressure ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold';
END IF;
IF metric_rec.wind > (alert_rec.alerting->'high_wind_speed_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_wind_speed_threshold'->>'date') IS NULL) OR
(((_alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_wind_speed_threshold": {"value": '|| metric_rec.wind ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_wind_speed_threshold value:'|| metric_rec.wind ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold';
END IF;
if metric_rec.voltage < (alert_rec.alerting->'low_battery_voltage_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_voltage_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_voltage_threshold": {"value": '|| metric_rec.voltage ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_voltage_threshold value:'|| metric_rec.voltage ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold';
END IF;
if (metric_rec.charge*100) < (alert_rec.alerting->'low_battery_charge_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_charge_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_charge_threshold": {"value": '|| (metric_rec.charge*100) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_charge_threshold value:'|| (metric_rec.charge*100) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold';
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{alert_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_alerts_fn
IS 'init by pg_cron to check for alerts';
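-- Example (illustrative sketch, not part of this migration): alerts are only
-- evaluated for accounts with preferences->'alerting'->'enabled' = true; a user
-- could opt in via the preference helper, overriding alert_default as needed.
--   SELECT api.update_user_preferences_fn('{alerting}'::TEXT,
--       '{"enabled": true, "min_notification_interval": 6}'::TEXT);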
-- CRON for no vessel notification
CREATE FUNCTION public.cron_no_vessel_fn() RETURNS void AS $no_vessel$
DECLARE
no_vessel record;
user_settings jsonb;
BEGIN
-- Check for users with no vessel registered
RAISE NOTICE 'cron_no_vessel_fn';
FOR no_vessel in
SELECT a.user_id,a.email,a.first
FROM auth.accounts a
WHERE NOT EXISTS (
SELECT *
FROM auth.vessels v
WHERE v.owner_email = a.email)
LOOP
RAISE NOTICE '-> cron_no_vessel_rec_fn for [%]', no_vessel;
SELECT json_build_object('email', no_vessel.email, 'recipient', no_vessel.first) into user_settings;
RAISE NOTICE '-> debug cron_no_vessel_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_vessel'::TEXT, user_settings::JSONB);
END LOOP;
END;
$no_vessel$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_no_vessel_fn
IS 'init by pg_cron, check for users with no vessel registered, then send notification';
CREATE FUNCTION public.cron_no_metadata_fn() RETURNS void AS $no_metadata$
DECLARE
no_metadata_rec record;
user_settings jsonb;
BEGIN
-- Check for vessels registered but with no metadata
RAISE NOTICE 'cron_no_metadata_fn';
FOR no_metadata_rec in
SELECT
a.user_id,a.email,a.first
FROM auth.accounts a, auth.vessels v
WHERE NOT EXISTS (
SELECT *
FROM api.metadata m
WHERE v.vessel_id = m.vessel_id) AND v.owner_email = a.email
LOOP
RAISE NOTICE '-> cron_process_no_metadata_rec_fn for [%]', no_metadata_rec;
SELECT json_build_object('email', no_metadata_rec.email, 'recipient', no_metadata_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_no_metadata_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_metadata'::TEXT, user_settings::JSONB);
END LOOP;
END;
$no_metadata$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_no_metadata_fn
IS 'init by pg_cron, check for vessels with no metadata, then send notification';
CREATE FUNCTION public.cron_no_activity_fn() RETURNS void AS $no_activity$
DECLARE
no_activity_rec record;
user_settings jsonb;
BEGIN
-- Check for vessels with no activity for more than 230 days
RAISE NOTICE 'cron_no_activity_fn';
FOR no_activity_rec in
SELECT
v.owner_email,m.name,m.vessel_id,m.time,a.first
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '230 DAYS'
LOOP
RAISE NOTICE '-> cron_process_no_activity_rec_fn for [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.owner_email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_no_activity_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
END LOOP;
END;
$no_activity$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_no_activity_fn
IS 'init by pg_cron, check for vessels with no activity for more than 230 days, then send notification';
CREATE FUNCTION public.cron_deactivated_fn() RETURNS void AS $deactivated$
DECLARE
no_activity_rec record;
user_settings jsonb;
BEGIN
RAISE NOTICE 'cron_deactivated_fn';
-- List accounts with vessel inactivity for more than 1 YEAR
FOR no_activity_rec in
SELECT
v.owner_email,m.name,m.vessel_id,m.time,a.first
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '1 YEAR'
LOOP
RAISE NOTICE '-> cron_process_deactivated_rec_fn for inactivity [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.owner_email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_deactivated_rec_fn inactivity [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
--PERFORM public.delete_account_fn(no_activity_rec.owner_email::TEXT, no_activity_rec.vessel_id::TEXT);
END LOOP;
-- List accounts with no vessel metadata for more than 1 YEAR
FOR no_activity_rec in
SELECT
a.user_id,a.email,a.first,a.created_at
FROM auth.accounts a, auth.vessels v
WHERE NOT EXISTS (
SELECT *
FROM api.metadata m
WHERE v.vessel_id = m.vessel_id) AND v.owner_email = a.email
AND v.created_at < NOW() AT TIME ZONE 'UTC' - INTERVAL '1 YEAR'
LOOP
RAISE NOTICE '-> cron_process_deactivated_rec_fn for no metadata [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_deactivated_rec_fn no metadata [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
--PERFORM public.delete_account_fn(no_activity_rec.owner_email::TEXT, no_activity_rec.vessel_id::TEXT);
END LOOP;
-- List accounts with no vessel created for more than 1 YEAR
FOR no_activity_rec in
SELECT a.user_id,a.email,a.first,a.created_at
FROM auth.accounts a
WHERE NOT EXISTS (
SELECT *
FROM auth.vessels v
WHERE v.owner_email = a.email)
AND a.created_at < NOW() AT TIME ZONE 'UTC' - INTERVAL '1 YEAR'
LOOP
RAISE NOTICE '-> cron_process_deactivated_rec_fn for no vessel [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_deactivated_rec_fn no vessel [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
--PERFORM public.delete_account_fn(no_activity_rec.owner_email::TEXT, no_activity_rec.vessel_id::TEXT);
END LOOP;
END;
$deactivated$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_deactivated_fn
IS 'init by pg_cron, check for vessels with no activity for more than 1 year, then send notification and delete data';
DROP FUNCTION IF EXISTS public.cron_prune_otp_fn;
CREATE OR REPLACE FUNCTION public.cron_prune_otp_fn() RETURNS void
AS $$
DECLARE
otp_rec record;
BEGIN
-- Purge OTP older than 15 minutes
RAISE NOTICE 'cron_prune_otp_fn';
FOR otp_rec in
SELECT *
FROM auth.otp
WHERE otp_timestamp < NOW() AT TIME ZONE 'UTC' - INTERVAL '15 MINUTES'
ORDER BY otp_timestamp desc
LOOP
RAISE NOTICE '-> cron_prune_otp_fn deleting expired otp for user [%]', otp_rec.user_email;
-- remove entry
DELETE FROM auth.otp
WHERE user_email = otp_rec.user_email;
RAISE NOTICE '-> cron_prune_otp_fn deleted expired otp for user [%]', otp_rec.user_email;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_prune_otp_fn
IS 'init by pg_cron to purge OTP tokens older than 15 minutes';
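-- Example (illustrative sketch, not part of this migration): a matching pg_cron
-- entry; the 15-minute schedule shown here is an assumption.
--   SELECT cron.schedule('cron_prune_otp', '*/15 * * * *', 'select public.cron_prune_otp_fn()');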
DROP FUNCTION IF EXISTS public.cron_process_prune_otp_fn();
DROP FUNCTION IF EXISTS public.cron_process_no_vessel_fn();
DROP FUNCTION IF EXISTS public.cron_process_no_metadata_fn();
DROP FUNCTION IF EXISTS public.cron_process_no_activity_fn();
DROP FUNCTION IF EXISTS public.cron_process_deactivated_fn();
DROP FUNCTION IF EXISTS public.cron_process_windy_fn();
DROP FUNCTION IF EXISTS public.cron_process_alerts_fn();
-- Remove deprecated fn
DROP FUNCTION IF EXISTS public.cron_process_new_account_fn();
DROP FUNCTION IF EXISTS public.cron_process_new_account_otp_validation_fn();
DROP FUNCTION IF EXISTS public.cron_process_new_moorage_fn();
DROP FUNCTION IF EXISTS public.cron_process_new_vessel_fn();
CREATE OR REPLACE FUNCTION send_notification_fn(
IN email_type TEXT,
IN user_settings JSONB) RETURNS VOID
AS $send_notification$
DECLARE
app_settings JSONB;
_email_notifications BOOLEAN := False;
_phone_notifications BOOLEAN := False;
_pushover_user_key TEXT := NULL;
pushover_settings JSONB := NULL;
_telegram_notifications BOOLEAN := False;
_telegram_chat_id TEXT := NULL;
telegram_settings JSONB := NULL;
_email TEXT := NULL;
BEGIN
-- TODO input check
--RAISE NOTICE '--> send_notification_fn type [%]', email_type;
-- Gather notification app settings, eg: email, pushover, telegram
app_settings := get_app_settings_fn();
--RAISE NOTICE '--> send_notification_fn app_settings [%]', app_settings;
--RAISE NOTICE '--> user_settings [%]', user_settings->>'email'::TEXT;
-- Gather notifications settings and merge with user settings
-- Send notification email
SELECT preferences['email_notifications'] INTO _email_notifications
FROM auth.accounts a
WHERE a.email = user_settings->>'email'::TEXT;
RAISE NOTICE '--> send_notification_fn email_notifications [%]', _email_notifications;
-- If email server app settings set and if email user settings set
IF app_settings['app.email_server'] IS NOT NULL AND _email_notifications IS True THEN
PERFORM send_email_py_fn(email_type::TEXT, user_settings::JSONB, app_settings::JSONB);
END IF;
-- Send notification pushover
SELECT preferences['phone_notifications'],preferences->>'pushover_user_key' INTO _phone_notifications,_pushover_user_key
FROM auth.accounts a
WHERE a.email = user_settings->>'email'::TEXT;
RAISE NOTICE '--> send_notification_fn phone_notifications [%]', _phone_notifications;
-- If pushover app settings set and if pushover user settings set
IF app_settings['app.pushover_app_token'] IS NOT NULL AND _phone_notifications IS True AND _pushover_user_key IS NOT NULL THEN
SELECT json_build_object('pushover_user_key', _pushover_user_key) into pushover_settings;
SELECT user_settings::JSONB || pushover_settings::JSONB into user_settings;
--RAISE NOTICE '--> send_notification_fn user_settings + pushover [%]', user_settings;
PERFORM send_pushover_py_fn(email_type::TEXT, user_settings::JSONB, app_settings::JSONB);
END IF;
-- Send notification telegram
SELECT (preferences->'telegram'->'chat'->'id') IS NOT NULL,preferences['telegram']['chat']['id'] INTO _telegram_notifications,_telegram_chat_id
FROM auth.accounts a
WHERE a.email = user_settings->>'email'::TEXT;
RAISE NOTICE '--> send_notification_fn telegram_notifications [%]', _telegram_notifications;
-- If telegram app settings set and if telegram user settings set
IF app_settings['app.telegram_bot_token'] IS NOT NULL AND _telegram_notifications IS True AND _phone_notifications IS True THEN
SELECT json_build_object('telegram_chat_id', _telegram_chat_id) into telegram_settings;
SELECT user_settings::JSONB || telegram_settings::JSONB into user_settings;
--RAISE NOTICE '--> send_notification_fn user_settings + telegram [%]', user_settings;
PERFORM send_telegram_py_fn(email_type::TEXT, user_settings::JSONB, app_settings::JSONB);
END IF;
END;
$send_notification$ LANGUAGE plpgsql;
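-- Example (illustrative sketch, not part of this migration): user_settings needs at
-- least 'email' and 'recipient'; channel-specific keys (pushover, telegram) are
-- merged in automatically from auth.accounts preferences.
--   SELECT send_notification_fn('no_vessel'::TEXT,
--       '{"email": "user@example.com", "recipient": "Alice"}'::JSONB);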
-- fn to trim new vessel name
CREATE FUNCTION new_vessel_trim_fn() RETURNS trigger AS $new_vessel_trim_fn$
BEGIN
NEW.name := TRIM(NEW.name);
RETURN NEW;
END;
$new_vessel_trim_fn$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.new_vessel_trim_fn
IS 'Trim whitespace from vessel name';
-- Trigger trim new vessel name
CREATE TRIGGER new_vessel_trim BEFORE INSERT ON auth.vessels
FOR EACH ROW EXECUTE FUNCTION public.new_vessel_trim_fn();
-- Description
COMMENT ON TRIGGER new_vessel_trim
ON auth.vessels
IS 'Trim whitespace from vessel name';
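-- Example (illustrative sketch, not part of this migration): with the trigger in
-- place a padded name is stored trimmed; the column list is an assumption.
--   INSERT INTO auth.vessels (owner_email, name) VALUES ('user@example.com', '  Aloha  ');
--   -- stored name: 'Aloha'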
CREATE or REPLACE FUNCTION public.logbook_update_geojson_fn(IN _id integer, IN _start text, IN _end text,
OUT _track_geojson JSON
) AS $logbook_geojson$
declare
log_geojson jsonb;
metrics_geojson jsonb;
_map jsonb;
begin
-- GeoJson Feature Logbook linestring
SELECT
ST_AsGeoJSON(log.*) into log_geojson
FROM
( SELECT
id,name,
distance,
duration,
avg_speed,
max_speed,
max_wind_speed,
_from_time,
_to_time,
_from_moorage_id,
_to_moorage_id,
notes,
track_geom
FROM api.logbook
WHERE id = _id
) AS log;
-- GeoJson Feature Metrics point
SELECT
json_agg(ST_AsGeoJSON(t.*)::json) into metrics_geojson
FROM (
( SELECT
time,
courseovergroundtrue,
speedoverground,
windspeedapparent,
longitude,latitude,
'' AS notes,
coalesce(metrics->'environment.wind.speedTrue', null) as truewindspeed,
coalesce(metrics->'environment.wind.directionTrue', null) as truewinddirection,
coalesce(status, null) as status,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND time >= _start::TIMESTAMPTZ
AND time <= _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
ORDER BY m.time ASC
)
) AS t;
-- Merge jsonb
SELECT log_geojson::jsonb || metrics_geojson::jsonb into _map;
-- output
SELECT
json_build_object(
'type', 'FeatureCollection',
'features', _map
) into _track_geojson;
END;
$logbook_geojson$ LANGUAGE plpgsql;
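-- Example (illustrative sketch, not part of this migration): rebuild the GeoJSON
-- for a logbook over its trip window; the id and timestamps are assumptions and
-- 'vessel.id' must be set in the session.
--   SELECT public.logbook_update_geojson_fn(1, '2024-03-01T10:00:00Z', '2024-03-01T16:00:00Z');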
-- Update version
UPDATE public.app_settings
SET value='0.7.0'
WHERE "name"='app.version';
-- Update the cron jobs to run the renamed functions
\c postgres
UPDATE cron.job
SET command='select public.cron_prune_otp_fn()'
WHERE jobname = 'cron_prune_otp';
UPDATE cron.job
SET command='select public.cron_no_vessel_fn()'
WHERE jobname = 'cron_no_vessel';
UPDATE cron.job
SET command='select public.cron_no_metadata_fn()'
WHERE jobname = 'cron_no_metadata';
UPDATE cron.job
SET command='select public.cron_no_activity_fn()'
WHERE jobname = 'cron_no_activity';
UPDATE cron.job
SET command='select public.cron_windy_fn()'
WHERE jobname = 'cron_windy';
UPDATE cron.job
SET command='select public.cron_alerts_fn()'
WHERE jobname = 'cron_alerts';


@@ -0,0 +1,528 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration March 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Force timezone, just in case'
set timezone to 'UTC';
CREATE OR REPLACE FUNCTION public.process_lat_lon_fn(IN lon NUMERIC, IN lat NUMERIC,
OUT moorage_id INTEGER,
OUT moorage_type INTEGER,
OUT moorage_name TEXT,
OUT moorage_country TEXT
) AS $process_lat_lon$
DECLARE
stay_rec record;
--moorage_id INTEGER := NULL;
--moorage_type INTEGER := 1; -- Unknown
--moorage_name TEXT := NULL;
--moorage_country TEXT := NULL;
existing_rec record;
geo jsonb;
overpass jsonb;
BEGIN
RAISE NOTICE '-> process_lat_lon_fn';
IF lon IS NULL OR lat IS NULL THEN
RAISE WARNING '-> process_lat_lon_fn invalid input lon %, lat %', lon, lat;
--return NULL;
END IF;
-- Do we have an existing moorage within 300m of the new stay?
FOR existing_rec in
SELECT
*
FROM api.moorages m
WHERE
m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND m.geog IS NOT NULL
AND ST_DWithin(
Geography(ST_MakePoint(m.longitude, m.latitude)),
Geography(ST_MakePoint(lon, lat)),
300 -- in meters
)
AND m.vessel_id = current_setting('vessel.id', false)
ORDER BY id ASC
LOOP
-- found an existing moorage within 300m of the new stay
IF existing_rec.id IS NOT NULL AND existing_rec.id > 0 THEN
RAISE NOTICE '-> process_lat_lon_fn found previous moorages within 300m %', existing_rec;
EXIT; -- exit loop
END IF;
END LOOP;
-- if within 300m, use existing name and stay_code
-- else insert a new entry
IF existing_rec.id IS NOT NULL AND existing_rec.id > 0 THEN
RAISE NOTICE '-> process_lat_lon_fn found close by moorage using existing name and stay_code %', existing_rec;
moorage_id := existing_rec.id;
moorage_name := existing_rec.name;
moorage_type := existing_rec.stay_code;
ELSE
RAISE NOTICE '-> process_lat_lon_fn create new moorage';
-- query overpass api to guess moorage type
overpass := overpass_py_fn(lon::NUMERIC, lat::NUMERIC);
RAISE NOTICE '-> process_lat_lon_fn overpass name:[%] seamark:type:[%]', overpass->'name', overpass->'seamark:type';
moorage_type = 1; -- Unknown
IF overpass->>'seamark:type' = 'harbour' AND overpass->>'seamark:harbour:category' = 'marina' then
moorage_type = 4; -- Dock
ELSIF overpass->>'seamark:type' = 'mooring' AND overpass->>'seamark:mooring:category' = 'buoy' then
moorage_type = 3; -- Mooring Buoy
ELSIF overpass->>'seamark:type' ~ '(anchorage|anchor_berth|berth)' OR overpass->>'natural' ~ '(bay|beach)' then
moorage_type = 2; -- Anchor
ELSIF overpass->>'seamark:type' = 'mooring' then
moorage_type = 3; -- Mooring Buoy
ELSIF overpass->>'leisure' = 'marina' then
moorage_type = 4; -- Dock
END IF;
-- reverse geocode lon,lat to get name and country code
geo := reverse_geocode_py_fn('nominatim', lon::NUMERIC, lat::NUMERIC);
moorage_country := geo->>'country_code';
IF overpass->>'name:en' IS NOT NULL then
moorage_name = overpass->>'name:en';
ELSIF overpass->>'name' IS NOT NULL then
moorage_name = overpass->>'name';
ELSE
moorage_name := geo->>'name';
END IF;
RAISE NOTICE '-> process_lat_lon_fn output name:[%] type:[%]', moorage_name, moorage_type;
RAISE NOTICE '-> process_lat_lon_fn insert new moorage for [%] name:[%] type:[%]', current_setting('vessel.id', false), moorage_name, moorage_type;
-- Insert new moorage from stay
INSERT INTO api.moorages
(vessel_id, name, country, stay_code, reference_count, latitude, longitude, geog, overpass, nominatim)
VALUES (
current_setting('vessel.id', false),
coalesce(moorage_name, null),
coalesce(moorage_country, null),
moorage_type,
1,
lat,
lon,
Geography(ST_MakePoint(lon, lat)),
coalesce(overpass, null),
coalesce(geo, null)
) returning id into moorage_id;
-- Add moorage entry to process queue for reference
--INSERT INTO process_queue (channel, payload, stored, ref_id, processed)
-- VALUES ('new_moorage', moorage_id, now(), current_setting('vessel.id', true), now());
END IF;
--return json_build_object(
-- 'id', moorage_id,
-- 'name', moorage_name,
-- 'type', moorage_type
-- )::jsonb;
END;
$process_lat_lon$ LANGUAGE plpgsql;
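-- Example (illustrative sketch, not part of this migration): resolve a position to
-- a moorage id/type/name/country; the coordinates are assumptions and 'vessel.id'
-- must be set in the session.
--   SELECT * FROM public.process_lat_lon_fn(-122.4194, 37.7749);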
CREATE or replace FUNCTION public.logbook_update_geojson_fn(IN _id integer, IN _start text, IN _end text,
OUT _track_geojson JSON
) AS $logbook_geojson$
declare
log_geojson jsonb;
metrics_geojson jsonb;
_map jsonb;
begin
-- GeoJson Feature Logbook linestring
SELECT
ST_AsGeoJSON(log.*) into log_geojson
FROM
( SELECT
id,name,
distance,
duration,
avg_speed,
max_speed,
max_wind_speed,
_from_time,
_to_time,
_from_moorage_id,
_to_moorage_id,
notes,
track_geom
FROM api.logbook
WHERE id = _id
) AS log;
-- GeoJson Feature Metrics point
SELECT
json_agg(ST_AsGeoJSON(t.*)::json) into metrics_geojson
FROM (
( SELECT
time,
courseovergroundtrue,
speedoverground,
windspeedapparent,
longitude,latitude,
'' AS notes,
coalesce(metersToKnots((metrics->'environment.wind.speedTrue')::NUMERIC), null) as truewindspeed,
coalesce(radiantToDegrees((metrics->'environment.wind.directionTrue')::NUMERIC), null) as truewinddirection,
coalesce(status, null) as status,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND time >= _start::TIMESTAMPTZ
AND time <= _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
ORDER BY m.time ASC
)
) AS t;
-- Merge jsonb
SELECT log_geojson::jsonb || metrics_geojson::jsonb into _map;
-- output
SELECT
json_build_object(
'type', 'FeatureCollection',
'features', _map
) into _track_geojson;
END;
$logbook_geojson$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION public.metersToKnots(IN meters NUMERIC)
RETURNS NUMERIC
AS $$
BEGIN
RETURN ROUND(meters * 1.9438445, 2);
END
$$
LANGUAGE plpgsql IMMUTABLE;
-- Description
COMMENT ON FUNCTION
public.metersToKnots
IS 'Convert speed in meters/s to knots';
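-- Example (illustrative sketch, not part of this migration):
--   SELECT public.metersToKnots(5.14); -- 9.99 knots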
CREATE OR REPLACE FUNCTION logbook_update_extra_json_fn(IN _id integer, IN _start text, IN _end text,
OUT _extra_json JSON
) AS $logbook_extra_json$
declare
obs_json jsonb default '{ "seaState": -1, "cloudCoverage": -1, "visibility": -1}'::jsonb;
log_json jsonb default '{}'::jsonb;
runtime_json jsonb default '{}'::jsonb;
metrics_json jsonb default '{}'::jsonb;
metric_rec record;
BEGIN
-- Calculate 'navigation.log' metrics
WITH
start_trip as (
-- Fetch 'navigation.log' start, first entry
SELECT key, value
FROM api.metrics m,
jsonb_each_text(m.metrics)
WHERE key ILIKE 'navigation.log'
AND time = _start::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
),
end_trip as (
-- Fetch 'navigation.log' end, last entry
SELECT key, value
FROM api.metrics m,
jsonb_each_text(m.metrics)
WHERE key ILIKE 'navigation.log'
AND time = _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
),
nm as (
-- calculate distance and convert meters to nautical miles
SELECT ((end_trip.value::NUMERIC - start_trip.value::numeric) * 0.00053996) as trip from start_trip,end_trip
)
-- Generate JSON
SELECT jsonb_build_object('navigation.log', trip) INTO log_json FROM nm;
RAISE NOTICE '-> logbook_update_extra_json_fn navigation.log: %', log_json;
-- Calculate engine hours from propulsion.%.runTime first entry
FOR metric_rec IN
SELECT key, value
FROM api.metrics m,
jsonb_each_text(m.metrics)
WHERE key ILIKE 'propulsion.%.runTime'
AND time = _start::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
LOOP
-- Engine Hours in seconds
RAISE NOTICE '-> logbook_update_extra_json_fn propulsion.*.runTime: %', metric_rec;
with
end_runtime AS (
-- Fetch 'propulsion.*.runTime' last entry
SELECT key, value
FROM api.metrics m,
jsonb_each_text(m.metrics)
WHERE key ILIKE metric_rec.key
AND time = _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
),
runtime AS (
-- calculate runTime Engine Hours as ISO duration
--SELECT (end_runtime.value::numeric - metric_rec.value::numeric) AS value FROM end_runtime
SELECT (((end_runtime.value::numeric - metric_rec.value::numeric) / 3600) * '1 hour'::interval)::interval as value FROM end_runtime
)
-- Generate JSON
SELECT jsonb_build_object(metric_rec.key, runtime.value) INTO runtime_json FROM runtime;
RAISE NOTICE '-> logbook_update_extra_json_fn key: %, value: %', metric_rec.key, runtime_json;
END LOOP;
-- Update logbook with extra value and return json
SELECT COALESCE(log_json::JSONB, '{}'::jsonb) || COALESCE(runtime_json::JSONB, '{}'::jsonb) INTO metrics_json;
SELECT jsonb_build_object('metrics', metrics_json, 'observations', obs_json) INTO _extra_json;
RAISE NOTICE '-> logbook_update_extra_json_fn log_json: %, runtime_json: %, _extra_json: %', log_json, runtime_json, _extra_json;
END;
$logbook_extra_json$ LANGUAGE plpgsql;
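-- Example (illustrative sketch, not part of this migration): compute the extra JSON
-- (trip distance from navigation.log, engine runtime as an ISO duration) for a trip
-- window; the arguments are assumptions.
--   SELECT logbook_update_extra_json_fn(1, '2024-03-01T10:00:00Z', '2024-03-01T16:00:00Z');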
DROP FUNCTION IF EXISTS public.logbook_update_gpx_fn();
CREATE OR REPLACE FUNCTION metadata_upsert_trigger_fn() RETURNS trigger AS $metadata_upsert$
DECLARE
metadata_id integer;
metadata_active boolean;
BEGIN
-- Set client_id to new value to allow RLS
--PERFORM set_config('vessel.client_id', NEW.client_id, false);
-- UPSERT - Insert vs Update for Metadata
--RAISE NOTICE 'metadata_upsert_trigger_fn';
--PERFORM set_config('vessel.id', NEW.vessel_id, true);
--RAISE WARNING 'metadata_upsert_trigger_fn [%] [%]', current_setting('vessel.id', true), NEW;
SELECT m.id,m.active INTO metadata_id, metadata_active
FROM api.metadata m
WHERE m.vessel_id IS NOT NULL AND m.vessel_id = current_setting('vessel.id', true);
--RAISE NOTICE 'metadata_id is [%]', metadata_id;
IF metadata_id IS NOT NULL THEN
-- send notification if boat is back online
IF metadata_active is False THEN
-- Add monitor online entry to process queue for later notification
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('monitoring_online', metadata_id, now(), current_setting('vessel.id', true));
END IF;
-- Update vessel metadata
UPDATE api.metadata
SET
name = NEW.name,
mmsi = NEW.mmsi,
client_id = NEW.client_id,
length = NEW.length,
beam = NEW.beam,
height = NEW.height,
ship_type = NEW.ship_type,
plugin_version = NEW.plugin_version,
signalk_version = NEW.signalk_version,
platform = REGEXP_REPLACE(NEW.platform, '[^a-zA-Z0-9\(\) ]', '', 'g'),
configuration = NEW.configuration,
-- time = NEW.time, ignore the time sent by the vessel as it is out of sync sometimes.
time = NOW(), -- overwrite the time sent by the vessel
active = true
WHERE id = metadata_id;
RETURN NULL; -- Ignore insert
ELSE
IF NEW.vessel_id IS NULL THEN
-- set vessel_id from jwt if not present in INSERT query
NEW.vessel_id := current_setting('vessel.id');
END IF;
-- Ignore and overwrite the time sent by the vessel
NEW.time := NOW();
RETURN NEW; -- Insert new vessel metadata
END IF;
END;
$metadata_upsert$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION public.cron_windy_fn() RETURNS void AS $$
DECLARE
windy_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ := NOW();
metric_rec record;
windy_metric jsonb;
app_settings jsonb;
user_settings jsonb;
windy_pws jsonb;
BEGIN
-- Check for new observations pending update
RAISE NOTICE 'cron_process_windy_fn';
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Find users with Windy active and with an active vessel
-- Map account id to Windy Station ID
FOR windy_rec in
SELECT
a.id,a.email,v.vessel_id,v.name,
COALESCE((a.preferences->'windy_last_metric')::TEXT, default_last_metric::TEXT) as last_metric
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'public_windy')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_process_windy_fn for [%]', windy_rec;
PERFORM set_config('vessel.id', windy_rec.vessel_id, false);
--RAISE WARNING 'public.cron_process_windy_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(windy_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_process_windy_fn checking user_settings [%]', user_settings;
-- Get all metrics since the last windy_last_metric, averaged in 5-minute buckets
-- TODO: use json_agg to send all data at once, but there is an issue with the Python jsonb transformation of decimals.
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.outside.temperature')::numeric) AS temperature,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.outside.relativeHumidity')::numeric) AS rh,
avg((m.metrics->'environment.wind.directionTrue')::numeric) AS winddir,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
max((m.metrics->'environment.wind.speedTrue')::numeric) AS gust,
last(latitude, time) AS lat,
last(longitude, time) AS lng
FROM api.metrics m
WHERE vessel_id = windy_rec.vessel_id
AND m.time >= windy_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_process_windy_fn checking metrics [%]', metric_rec;
IF metric_rec.wind is null or metric_rec.temperature is null THEN
-- Ignore when there are no metrics.
-- Send notification
PERFORM send_notification_fn('windy_error'::TEXT, user_settings::JSONB);
-- Disable windy
PERFORM api.update_user_preferences_fn('{public_windy}'::TEXT, 'false'::TEXT);
RETURN;
END IF;
-- https://community.windy.com/topic/8168/report-your-weather-station-data-to-windy
-- temp from kelvin to Celsius
-- winddir from radians to degrees
-- rh from ratio to percentage
SELECT jsonb_build_object(
'dateutc', metric_rec.time_bucket,
'station', windy_rec.id,
'name', windy_rec.name,
'lat', metric_rec.lat,
'lon', metric_rec.lng,
'wind', metric_rec.wind,
'gust', metric_rec.gust,
'pressure', metric_rec.pressure,
'winddir', radiantToDegrees(metric_rec.winddir::numeric),
'temp', kelvinToCel(metric_rec.temperature::numeric),
'rh', valToPercent(metric_rec.rh::numeric)
) INTO windy_metric;
RAISE NOTICE '-> cron_process_windy_fn checking windy_metrics [%]', windy_metric;
SELECT windy_pws_py_fn(windy_metric, user_settings, app_settings) into windy_pws;
RAISE NOTICE '-> cron_process_windy_fn Windy PWS [%]', ((windy_pws->'header')::JSONB ? 'id');
IF NOT((user_settings->'settings')::JSONB ? 'windy') and ((windy_pws->'header')::JSONB ? 'id') then
RAISE NOTICE '-> cron_process_windy_fn new Windy PWS [%]', (windy_pws->'header')::JSONB->>'id';
-- Send metrics to Windy
PERFORM api.update_user_preferences_fn('{windy}'::TEXT, ((windy_pws->'header')::JSONB->>'id')::TEXT);
-- Send notification
PERFORM send_notification_fn('windy'::TEXT, user_settings::JSONB);
-- Refresh user settings after first success
user_settings := get_user_settings_from_vesselid_fn(windy_rec.vessel_id::TEXT);
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{windy_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$$ language plpgsql;
DROP FUNCTION public.delete_vessel_fn;
CREATE OR REPLACE FUNCTION public.delete_vessel_fn(IN _vessel_id TEXT) RETURNS JSONB
AS $delete_vessel$
DECLARE
total_metrics INTEGER;
del_metrics INTEGER;
del_logs INTEGER;
del_stays INTEGER;
del_moorages INTEGER;
del_queue INTEGER;
out_json JSONB;
BEGIN
select count(*) INTO total_metrics from api.metrics m where vessel_id = _vessel_id;
WITH deleted AS (delete from api.metrics m where vessel_id = _vessel_id RETURNING *) SELECT count(*) INTO del_metrics FROM deleted;
WITH deleted AS (delete from api.logbook l where vessel_id = _vessel_id RETURNING *) SELECT count(*) INTO del_logs FROM deleted;
WITH deleted AS (delete from api.stays s where vessel_id = _vessel_id RETURNING *) SELECT count(*) INTO del_stays FROM deleted;
WITH deleted AS (delete from api.moorages m where vessel_id = _vessel_id RETURNING *) SELECT count(*) INTO del_moorages FROM deleted;
WITH deleted AS (delete from public.process_queue m where ref_id = _vessel_id RETURNING *) SELECT count(*) INTO del_queue FROM deleted;
SELECT jsonb_build_object('total_metrics', total_metrics,
'del_metrics', del_metrics,
'del_logs', del_logs,
'del_stays', del_stays,
'del_moorages', del_moorages,
'del_queue', del_queue) INTO out_json;
RETURN out_json;
END
$delete_vessel$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.delete_vessel_fn
IS 'Delete all vessel data (metrics,logbook,stays,moorages,process_queue) for a vessel_id';
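-- Example (illustrative sketch, not part of this migration): returns per-table
-- deletion counts as JSONB; the vessel_id value is an assumption.
--   SELECT public.delete_vessel_fn('abcdef0123456789');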
DROP FUNCTION IF EXISTS public.cron_process_no_activity_fn();
CREATE OR REPLACE FUNCTION public.cron_process_no_activity_fn() RETURNS void AS $no_activity$
DECLARE
no_activity_rec record;
user_settings jsonb;
total_metrics INTEGER;
total_logs INTEGER;
del_metrics INTEGER;
out_json JSONB;
BEGIN
-- Check for vessels with no activity for more than 230 days
RAISE NOTICE 'cron_process_no_activity_fn';
FOR no_activity_rec in
SELECT
v.owner_email,m.name,m.vessel_id,m.time,a.first
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '230 DAYS'
AND v.owner_email <> 'demo@openplotter.cloud'
ORDER BY m.time DESC
LOOP
RAISE NOTICE '-> cron_process_no_activity_rec_fn for [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.owner_email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_no_activity_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
SELECT count(*) INTO total_metrics from api.metrics where vessel_id = no_activity_rec.vessel_id;
WITH deleted AS (delete from api.metrics m where vessel_id = no_activity_rec.vessel_id RETURNING *) SELECT count(*) INTO del_metrics FROM deleted;
SELECT count(*) INTO total_logs from api.logbook where vessel_id = no_activity_rec.vessel_id;
SELECT jsonb_build_object('total_metrics', total_metrics, 'total_logs', total_logs, 'del_metrics', del_metrics) INTO out_json;
RAISE NOTICE '-> debug cron_process_no_activity_rec_fn [%]', out_json;
END LOOP;
END;
$no_activity$ language plpgsql;
DROP FUNCTION public.delete_account_fn(text,text);
CREATE OR REPLACE FUNCTION public.delete_account_fn(IN _email TEXT, IN _vessel_id TEXT) RETURNS JSONB
AS $delete_account$
DECLARE
del_vessel_data JSONB;
del_meta INTEGER;
del_vessel INTEGER;
del_account INTEGER;
out_json JSONB;
BEGIN
SELECT public.delete_vessel_fn(_vessel_id) INTO del_vessel_data;
WITH deleted AS (delete from api.metadata where vessel_id = _vessel_id RETURNING *) SELECT count(*) INTO del_meta FROM deleted;
WITH deleted AS (delete from auth.vessels where vessel_id = _vessel_id RETURNING *) SELECT count(*) INTO del_vessel FROM deleted;
WITH deleted AS (delete from auth.accounts where email = _email RETURNING *) SELECT count(*) INTO del_account FROM deleted;
SELECT jsonb_build_object('del_metadata', del_meta,
'del_vessel', del_vessel,
'del_account', del_account) || del_vessel_data INTO out_json;
RETURN out_json;
END
$delete_account$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.delete_account_fn
IS 'Delete all data for an account by email and vessel_id';
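-- Example (illustrative sketch, not part of this migration): cascades through
-- delete_vessel_fn and merges the counts; the values are assumptions.
--   SELECT public.delete_account_fn('user@example.com', 'abcdef0123456789');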
-- Update version
UPDATE public.app_settings
SET value='0.7.1'
WHERE "name"='app.version';


@@ -0,0 +1,625 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration April 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
UPDATE public.email_templates
SET email_content='Hello __RECIPIENT__,
Sorry! We could not convert your boat into a Windy Personal Weather Station due to missing data (temperature, wind or pressure).
Windy Personal Weather Station is now disabled.'
WHERE "name"='windy_error';
CREATE OR REPLACE FUNCTION public.cron_windy_fn() RETURNS void AS $$
DECLARE
windy_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ := NOW();
metric_rec record;
windy_metric jsonb;
app_settings jsonb;
user_settings jsonb;
windy_pws jsonb;
BEGIN
-- Check for new observations pending update
RAISE NOTICE 'cron_process_windy_fn';
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Find users with Windy active and with an active vessel
-- Map account id to Windy Station ID
FOR windy_rec in
SELECT
a.id,a.email,v.vessel_id,v.name,
COALESCE((a.preferences->'windy_last_metric')::TEXT, default_last_metric::TEXT) as last_metric
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'public_windy')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_process_windy_fn for [%]', windy_rec;
PERFORM set_config('vessel.id', windy_rec.vessel_id, false);
--RAISE WARNING 'public.cron_process_windy_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(windy_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_process_windy_fn checking user_settings [%]', user_settings;
-- Get all metrics since the last windy_last_metric, averaged in 5-minute buckets
-- TODO: use json_agg to send all data at once, but there is an issue with the Python jsonb transformation of decimals.
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.outside.temperature')::numeric) AS temperature,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.outside.relativeHumidity')::numeric) AS rh,
avg((m.metrics->'environment.wind.directionTrue')::numeric) AS winddir,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
max((m.metrics->'environment.wind.speedTrue')::numeric) AS gust,
last(latitude, time) AS lat,
last(longitude, time) AS lng
FROM api.metrics m
WHERE vessel_id = windy_rec.vessel_id
AND m.time >= windy_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_process_windy_fn checking metrics [%]', metric_rec;
if metric_rec.wind is null or metric_rec.temperature is null
or metric_rec.pressure is null or metric_rec.rh is null then
-- Ignore when there are no metrics.
-- Send notification
PERFORM send_notification_fn('windy_error'::TEXT, user_settings::JSONB);
-- Disable windy
PERFORM api.update_user_preferences_fn('{public_windy}'::TEXT, 'false'::TEXT);
RETURN;
end if;
-- https://community.windy.com/topic/8168/report-your-weather-station-data-to-windy
-- temp from kelvin to Celsius
-- winddir from radians to degrees
-- rh from ratio to percentage
SELECT jsonb_build_object(
'dateutc', metric_rec.time_bucket,
'station', windy_rec.id,
'name', windy_rec.name,
'lat', metric_rec.lat,
'lon', metric_rec.lng,
'wind', metric_rec.wind,
'gust', metric_rec.gust,
'pressure', metric_rec.pressure,
'winddir', radiantToDegrees(metric_rec.winddir::numeric),
'temp', kelvinToCel(metric_rec.temperature::numeric),
'rh', valToPercent(metric_rec.rh::numeric)
) INTO windy_metric;
RAISE NOTICE '-> cron_process_windy_fn checking windy_metrics [%]', windy_metric;
SELECT windy_pws_py_fn(windy_metric, user_settings, app_settings) into windy_pws;
RAISE NOTICE '-> cron_process_windy_fn Windy PWS [%]', ((windy_pws->'header')::JSONB ? 'id');
IF NOT((user_settings->'settings')::JSONB ? 'windy') and ((windy_pws->'header')::JSONB ? 'id') then
RAISE NOTICE '-> cron_process_windy_fn new Windy PWS [%]', (windy_pws->'header')::JSONB->>'id';
-- Send metrics to Windy
PERFORM api.update_user_preferences_fn('{windy}'::TEXT, ((windy_pws->'header')::JSONB->>'id')::TEXT);
-- Send notification
PERFORM send_notification_fn('windy'::TEXT, user_settings::JSONB);
-- Refresh user settings after first success
user_settings := get_user_settings_from_vesselid_fn(windy_rec.vessel_id::TEXT);
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{windy_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$$ language plpgsql;
-- Add SECURITY DEFINER: run this function as admin to work around the bug below
-- ERROR: variable not found in subplan target list
CREATE OR REPLACE FUNCTION api.delete_logbook_fn(IN _id integer) RETURNS BOOLEAN AS $delete_logbook$
DECLARE
logbook_rec record;
previous_stays_id numeric;
current_stays_departed text;
current_stays_id numeric;
current_stays_active boolean;
BEGIN
-- If _id is not NULL
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> delete_logbook_fn invalid input %', _id;
RETURN FALSE;
END IF;
-- Get the logbook record and ensure all necessary fields exist
SELECT * INTO logbook_rec
FROM api.logbook
WHERE id = _id;
-- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> delete_logbook_fn invalid logbook %', _id;
RETURN FALSE;
END IF;
-- Update logbook
UPDATE api.logbook l
SET notes = 'mark for deletion'
WHERE l.vessel_id = current_setting('vessel.id', false)
AND id = logbook_rec.id;
-- Update metrics status to moored
-- This generates an error when run as user_role: "variable not found in subplan target list"
UPDATE api.metrics
SET status = 'moored'
WHERE time >= logbook_rec._from_time
AND time <= logbook_rec._to_time
AND vessel_id = current_setting('vessel.id', false);
-- Get related stays
SELECT id,departed,active INTO current_stays_id,current_stays_departed,current_stays_active
FROM api.stays s
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived = logbook_rec._to_time;
-- Update related stays
UPDATE api.stays s
SET notes = 'mark for deletion'
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived = logbook_rec._to_time;
-- Find previous stays
SELECT id INTO previous_stays_id
FROM api.stays s
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived < logbook_rec._to_time
ORDER BY s.arrived DESC LIMIT 1;
-- Update previous stays with the departed time from current stays
-- and set the active state from current stays
UPDATE api.stays
SET departed = current_stays_departed::TIMESTAMPTZ,
active = current_stays_active
WHERE vessel_id = current_setting('vessel.id', false)
AND id = previous_stays_id;
-- Clean up, remove invalid logbook and stay entry
DELETE FROM api.logbook WHERE id = logbook_rec.id;
RAISE WARNING '-> delete_logbook_fn delete logbook [%]', logbook_rec.id;
DELETE FROM api.stays WHERE id = current_stays_id;
RAISE WARNING '-> delete_logbook_fn delete stays [%]', current_stays_id;
-- Clean up, Subtract (-1) moorages ref count
UPDATE api.moorages
SET reference_count = reference_count - 1
WHERE vessel_id = current_setting('vessel.id', false)
AND id = previous_stays_id;
RETURN TRUE;
END;
$delete_logbook$ LANGUAGE plpgsql security definer;
-- Allow users to update certain columns on specific tables in the API schema; add reference_count, needed when deleting a log
GRANT UPDATE (name, notes, stay_code, home_flag, reference_count) ON api.moorages TO user_role;
-- Allow users to update certain columns on specific tables in the API schema; add track_geojson
GRANT UPDATE (name, _from, _to, notes, track_geojson) ON api.logbook TO user_role;
DROP FUNCTION IF EXISTS api.timelapse2_fn;
CREATE OR REPLACE FUNCTION api.timelapse2_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL,
IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL,
OUT geojson JSONB) RETURNS JSONB AS $timelapse2$
DECLARE
_geojson jsonb;
BEGIN
-- Using a subquery to force id ordering by time
-- Users can directly edit the JSON to add a comment or remove a track point
-- Merge each log's track_geojson Point features into a single GeoJSON set of Points
--raise WARNING 'input % % %' , start_log, end_log, public.isnumeric(end_log::text);
IF start_log IS NOT NULL AND end_log IS NULL THEN
end_log := start_log;
END IF;
IF start_date IS NOT NULL AND end_date IS NULL THEN
end_date := start_date;
END IF;
--raise WARNING 'input % % %' , start_log, end_log, public.isnumeric(end_log::text);
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'Point'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l.id >= start_log
AND l.id <= end_log
AND l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
ELSIF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'Point'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l._from_time >= start_date::TIMESTAMPTZ
AND l._to_time <= end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
AND l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
ELSE
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'Point'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
END IF;
-- Return a GeoJSON FeatureCollection of Point features
-- result _geojson [null, null]
--RAISE WARNING 'result _geojson %' , _geojson;
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', _geojson ) INTO geojson;
END;
$timelapse2$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.timelapse2_fn
IS 'Export all selected logs geojson `track_geojson` to a geojson as points including properties';
-- Allow timelapse2_fn execution for user_role and api_anonymous (public replay)
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
GRANT EXECUTE ON FUNCTION api.timelapse2_fn TO api_anonymous;
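-- Example usage, a minimal sketch with hypothetical log ids and dates (kept commented out, as the migration should not execute it):
--SELECT api.timelapse2_fn(1, 5); -- by log id range
--SELECT api.timelapse2_fn(start_date => '2024-05-01', end_date => '2024-05-31'); -- by date range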
DROP FUNCTION IF EXISTS public.process_logbook_queue_fn;
CREATE OR REPLACE FUNCTION public.process_logbook_queue_fn(IN _id integer) RETURNS void AS $process_logbook_queue$
DECLARE
logbook_rec record;
from_name text;
to_name text;
log_name text;
from_moorage record;
to_moorage record;
avg_rec record;
geo_rec record;
log_settings jsonb;
user_settings jsonb;
geojson jsonb;
extra_json jsonb;
trip_note jsonb;
from_moorage_note jsonb;
to_moorage_note jsonb;
BEGIN
-- Ensure _id is valid
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> process_logbook_queue_fn invalid input %', _id;
RETURN;
END IF;
-- Get the logbook record where all necessary fields exist
SELECT * INTO logbook_rec
FROM api.logbook
WHERE active IS false
AND id = _id
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> process_logbook_queue_fn invalid logbook %', _id;
RETURN;
END IF;
PERFORM set_config('vessel.id', logbook_rec.vessel_id, false);
--RAISE WARNING 'public.process_logbook_queue_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Calculate logbook data average and geo
-- Update logbook entry with the latest metric data and calculate data
avg_rec := logbook_update_avg_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
geo_rec := logbook_update_geom_distance_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
-- Do we have an existing moorage within 300m of the new log
-- generate logbook name, concat _from_location and _to_location from moorage name
from_moorage := process_lat_lon_fn(logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
to_moorage := process_lat_lon_fn(logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC);
SELECT CONCAT(from_moorage.moorage_name, ' to ' , to_moorage.moorage_name) INTO log_name;
-- Process `propulsion.*.runTime` and `navigation.log`
-- Calculate extra json
extra_json := logbook_update_extra_json_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
RAISE NOTICE 'Updating valid logbook entry logbook id:[%] start:[%] end:[%]', logbook_rec.id, logbook_rec._from_time, logbook_rec._to_time;
UPDATE api.logbook
SET
duration = (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ),
avg_speed = avg_rec.avg_speed,
max_speed = avg_rec.max_speed,
max_wind_speed = avg_rec.max_wind_speed,
_from = from_moorage.moorage_name,
_from_moorage_id = from_moorage.moorage_id,
_to_moorage_id = to_moorage.moorage_id,
_to = to_moorage.moorage_name,
name = log_name,
track_geom = geo_rec._track_geom,
distance = geo_rec._track_distance,
extra = extra_json,
notes = NULL -- reset pre_log process
WHERE id = logbook_rec.id;
-- GeoJSON requires the track_geom field
geojson := logbook_update_geojson_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
UPDATE api.logbook
SET
track_geojson = geojson
WHERE id = logbook_rec.id;
-- Add trip details name as note for the first geometry point entry from the GeoJSON
SELECT format('{"trip": { "name": "%s", "duration": "%s", "distance": "%s" }}', logbook_rec.name, logbook_rec.duration, logbook_rec.distance) into trip_note;
-- Update the properties of the second feature (the first geometry Point)
UPDATE api.logbook
SET track_geojson = jsonb_set(
track_geojson,
'{features, 1, properties}',
(track_geojson -> 'features' -> 1 -> 'properties' || trip_note)::jsonb
)
WHERE id = logbook_rec.id
and track_geojson -> 'features' -> 1 -> 'geometry' ->> 'type' = 'Point';
-- Add moorage name as note for the third and last entry of the GeoJSON
SELECT format('{"notes": "%s"}', from_moorage.moorage_name) into from_moorage_note;
-- Update the properties of the third feature (the second geometry Point)
UPDATE api.logbook
SET track_geojson = jsonb_set(
track_geojson,
'{features, 2, properties}',
(track_geojson -> 'features' -> 2 -> 'properties' || from_moorage_note)::jsonb
)
WHERE id = logbook_rec.id
AND track_geojson -> 'features' -> 2 -> 'geometry' ->> 'type' = 'Point';
-- Update the note properties of the last feature with geometry point
SELECT format('{"notes": "%s"}', to_moorage.moorage_name) into to_moorage_note;
UPDATE api.logbook
SET track_geojson = jsonb_set(
track_geojson,
'{features, -1, properties}',
CASE
WHEN COALESCE((track_geojson -> 'features' -> -1 -> 'properties' ->> 'notes'), '') = '' THEN
(track_geojson -> 'features' -> -1 -> 'properties' || to_moorage_note)::jsonb
ELSE
track_geojson -> 'features' -> -1 -> 'properties'
END
)
WHERE id = logbook_rec.id
AND track_geojson -> 'features' -> -1 -> 'geometry' ->> 'type' = 'Point';
-- Prepare notification, gather user settings
SELECT json_build_object('logbook_name', log_name, 'logbook_link', logbook_rec.id) into log_settings;
user_settings := get_user_settings_from_vesselid_fn(logbook_rec.vessel_id::TEXT);
SELECT user_settings::JSONB || log_settings::JSONB into user_settings;
RAISE NOTICE '-> debug process_logbook_queue_fn get_user_settings_from_vesselid_fn [%]', user_settings;
RAISE NOTICE '-> debug process_logbook_queue_fn log_settings [%]', log_settings;
-- Send notification
PERFORM send_notification_fn('logbook'::TEXT, user_settings::JSONB);
-- Process badges
RAISE NOTICE '-> debug process_logbook_queue_fn user_settings [%]', user_settings->>'email'::TEXT;
PERFORM set_config('user.email', user_settings->>'email'::TEXT, false);
PERFORM badges_logbook_fn(logbook_rec.id, logbook_rec._to_time::TEXT);
PERFORM badges_geom_fn(logbook_rec.id, logbook_rec._to_time::TEXT);
END;
$process_logbook_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_logbook_queue_fn
IS 'Update logbook details when completed, logbook_update_avg_fn, logbook_update_geom_distance_fn, reverse_geocode_py_fn';
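-- Example usage, a minimal sketch with a hypothetical logbook id (normally invoked by the process queue cron):
--SELECT public.process_logbook_queue_fn(1);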
-- Update the db-pre-request check (check_jwt) for the new timelapse function
CREATE OR REPLACE FUNCTION public.check_jwt() RETURNS void AS $$
-- Prevent unregistered user or unregistered vessel access
-- Allow anonymous access
-- Needs to be refactored and simplified, especially the anonymous part.
DECLARE
_role name;
_email text;
anonymous record;
_path name;
_vid text;
_vname text;
boat TEXT;
_pid INTEGER := 0; -- public_id
_pvessel TEXT := NULL; -- public_vessel
_ptype TEXT := NULL; -- public_type
_ppath BOOLEAN := False; -- public_path
_pvalid BOOLEAN := False; -- public_valid
_pheader text := NULL; -- public_header
valid_public_type BOOLEAN := False;
account_rec record;
vessel_rec record;
BEGIN
-- Extract email and role from jwt token
--RAISE WARNING 'check_jwt jwt %', current_setting('request.jwt.claims', true);
SELECT current_setting('request.jwt.claims', true)::json->>'email' INTO _email;
PERFORM set_config('user.email', _email, false);
SELECT current_setting('request.jwt.claims', true)::json->>'role' INTO _role;
--RAISE WARNING 'jwt email %', current_setting('request.jwt.claims', true)::json->>'email';
--RAISE WARNING 'jwt role %', current_setting('request.jwt.claims', true)::json->>'role';
--RAISE WARNING 'cur_user %', current_user;
--TODO SELECT current_setting('request.jwt.uid', true)::json->>'uid' INTO _user_id;
--TODO RAISE WARNING 'jwt user_id %', current_setting('request.jwt.uid', true)::json->>'uid';
--TODO SELECT current_setting('request.jwt.vid', true)::json->>'vid' INTO _vessel_id;
--TODO RAISE WARNING 'jwt vessel_id %', current_setting('request.jwt.vid', true)::json->>'vid';
IF _role = 'user_role' THEN
-- Check the user exists in the accounts table
SELECT * INTO account_rec
FROM auth.accounts
WHERE auth.accounts.email = _email;
IF account_rec.email IS NULL THEN
RAISE EXCEPTION 'Invalid user'
USING HINT = 'Unknown user or password';
END IF;
-- Set session variables
PERFORM set_config('user.id', account_rec.user_id, false);
SELECT current_setting('request.path', true) into _path;
--RAISE WARNING 'req path %', current_setting('request.path', true);
-- Functions allowed without a defined vessel, like for the anonymous role
IF _path ~ '^\/rpc\/(login|signup|recover|reset)$' THEN
RETURN;
END IF;
-- Functions allowed without a defined vessel as user role
-- openapi doc, user settings, otp code and vessel registration
IF _path = '/rpc/settings_fn'
OR _path = '/rpc/register_vessel'
OR _path = '/rpc/update_user_preferences_fn'
OR _path = '/rpc/versions_fn'
OR _path = '/rpc/email_fn'
OR _path = '/' THEN
RETURN;
END IF;
-- Check that a vessel and user exist
SELECT auth.vessels.* INTO vessel_rec
FROM auth.vessels, auth.accounts
WHERE auth.vessels.owner_email = auth.accounts.email
AND auth.accounts.email = _email;
-- Check if the boat exists yet
IF vessel_rec.owner_email IS NULL THEN
-- Return http status code 551 with message
RAISE sqlstate 'PT551' using
message = 'Vessel Required',
detail = 'Invalid vessel',
hint = 'Unknown vessel';
--RETURN; -- ignore if not exist
END IF;
-- Redundant?
IF vessel_rec.vessel_id IS NULL THEN
RAISE EXCEPTION 'Invalid vessel'
USING HINT = 'Unknown vessel id';
END IF;
-- Set session variables
PERFORM set_config('vessel.id', vessel_rec.vessel_id, false);
PERFORM set_config('vessel.name', vessel_rec.name, false);
--RAISE WARNING 'public.check_jwt() user_role vessel.id [%]', current_setting('vessel.id', false);
--RAISE WARNING 'public.check_jwt() user_role vessel.name [%]', current_setting('vessel.name', false);
ELSIF _role = 'vessel_role' THEN
SELECT current_setting('request.path', true) into _path;
--RAISE WARNING 'req path %', current_setting('request.path', true);
-- Function allow without defined vessel like for anonymous role
IF _path ~ '^\/rpc\/(oauth_\w+)$' THEN
RETURN;
END IF;
-- Extract vessel_id from jwt token
SELECT current_setting('request.jwt.claims', true)::json->>'vid' INTO _vid;
-- Check the vessel and user exist
SELECT auth.vessels.* INTO vessel_rec
FROM auth.vessels, auth.accounts
WHERE auth.vessels.owner_email = auth.accounts.email
AND auth.accounts.email = _email
AND auth.vessels.vessel_id = _vid;
IF vessel_rec.owner_email IS NULL THEN
RAISE EXCEPTION 'Invalid vessel'
USING HINT = 'Unknown vessel owner_email';
END IF;
PERFORM set_config('vessel.id', vessel_rec.vessel_id, false);
PERFORM set_config('vessel.name', vessel_rec.name, false);
--RAISE WARNING 'public.check_jwt() user_role vessel.name %', current_setting('vessel.name', false);
--RAISE WARNING 'public.check_jwt() user_role vessel.id %', current_setting('vessel.id', false);
ELSIF _role = 'api_anonymous' THEN
--RAISE WARNING 'public.check_jwt() api_anonymous';
-- Check if path is a valid allow anonymous path
SELECT current_setting('request.path', true) ~ '^/(logs_view|log_view|rpc/timelapse_fn|rpc/timelapse2_fn|monitoring_view|stats_logs_view|stats_moorages_view|rpc/stats_logs_fn)$' INTO _ppath;
if _ppath is True then
-- Check if the custom header is present and valid
SELECT current_setting('request.headers', true)::json->>'x-is-public' into _pheader;
RAISE WARNING 'public.check_jwt() api_anonymous _pheader [%]', _pheader;
if _pheader is null then
RAISE EXCEPTION 'Invalid public_header'
USING HINT = 'Stop being so evil and maybe you can log in';
end if;
SELECT convert_from(decode(_pheader, 'base64'), 'utf-8')
~ '\w+,public_(logs|logs_list|stats|timelapse|monitoring),\d+$' into _pvalid;
RAISE WARNING 'public.check_jwt() api_anonymous _pvalid [%]', _pvalid;
if _pvalid is null or _pvalid is False then
RAISE EXCEPTION 'Invalid public_valid'
USING HINT = 'Stop being so evil and maybe you can log in';
end if;
WITH regex AS (
SELECT regexp_match(
convert_from(
decode(_pheader, 'base64'), 'utf-8'),
'(\w+),(public_(logs|logs_list|stats|timelapse|monitoring)),(\d+)$') AS match
)
SELECT match[1], match[2], match[4] into _pvessel, _ptype, _pid
FROM regex;
RAISE WARNING 'public.check_jwt() api_anonymous [%] [%] [%]', _pvessel, _ptype, _pid;
if _pvessel is not null and _ptype is not null then
-- Everything seems fine; get the vessel_id based on the vessel name.
SELECT _ptype::name = any(enum_range(null::public_type)::name[]) INTO valid_public_type;
IF valid_public_type IS False THEN
-- Ignore entry if type is invalid
RAISE EXCEPTION 'Invalid public_type'
USING HINT = 'Stop being so evil and maybe you can log in';
END IF;
-- Check if boat name match public_vessel name
boat := '^' || _pvessel || '$';
IF _ptype ~ '^public_(logs|timelapse)$' AND _pid > 0 THEN
WITH log as (
SELECT vessel_id from api.logbook l where l.id = _pid
)
SELECT v.vessel_id, v.name into anonymous
FROM auth.accounts a, auth.vessels v, jsonb_each_text(a.preferences) as prefs, log l
WHERE v.vessel_id = l.vessel_id
AND a.email = v.owner_email
AND a.preferences->>'public_vessel'::text ~* boat
AND prefs.key = _ptype::TEXT
AND prefs.value::BOOLEAN = true;
RAISE WARNING '-> ispublic_fn public_logs output boat:[%], type:[%], result:[%]', _pvessel, _ptype, anonymous;
IF anonymous.vessel_id IS NOT NULL THEN
PERFORM set_config('vessel.id', anonymous.vessel_id, false);
PERFORM set_config('vessel.name', anonymous.name, false);
RETURN;
END IF;
ELSE
SELECT v.vessel_id, v.name into anonymous
FROM auth.accounts a, auth.vessels v, jsonb_each_text(a.preferences) as prefs
WHERE a.email = v.owner_email
AND a.preferences->>'public_vessel'::text ~* boat
AND prefs.key = _ptype::TEXT
AND prefs.value::BOOLEAN = true;
RAISE WARNING '-> ispublic_fn output boat:[%], type:[%], result:[%]', _pvessel, _ptype, anonymous;
IF anonymous.vessel_id IS NOT NULL THEN
PERFORM set_config('vessel.id', anonymous.vessel_id, false);
PERFORM set_config('vessel.name', anonymous.name, false);
RETURN;
END IF;
END IF;
RAISE sqlstate 'PT404' using message = 'unknown resource';
END IF; -- end anonymous path
END IF;
ELSIF _role <> 'api_anonymous' THEN
RAISE EXCEPTION 'Invalid role'
USING HINT = 'Stop being so evil and maybe you can log in';
END IF;
END
$$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
public.check_jwt
IS 'PostgREST API db-pre-request check, set_config according to role (api_anonymous,vessel_role,user_role)';
GRANT EXECUTE ON FUNCTION public.check_jwt() TO api_anonymous;
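-- The x-is-public header checked above is the base64 encoding of '<vessel_name>,public_<type>,<id>'.
-- A minimal sketch with a hypothetical vessel name and log id, to generate a header value:
--SELECT encode(convert_to('aava,public_logs,99', 'utf-8'), 'base64');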
-- Update version
UPDATE public.app_settings
SET value='0.7.2'
WHERE "name"='app.version';

---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration May 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
VALUES ('account_disable','PostgSail Account disabled',E'Hello __RECIPIENT__,\nSorry! Your account is disabled. Please contact me to solve the issue.','PostgSail Account disabled!',E'Sorry!\nYour account is disabled. Please contact me to solve the issue.');
-- Check if user is disabled due to abuse
-- Track IP per user to avoid abuse
create or replace function
api.login(in email text, in pass text) returns auth.jwt_token as $$
declare
_role name;
result auth.jwt_token;
app_jwt_secret text;
_email_valid boolean := false;
_email text := email;
_user_id text := null;
_user_disable boolean := false;
headers json := current_setting('request.headers', true)::json;
client_ip text := coalesce(headers->>'x-client-ip', NULL);
begin
-- check email and password
select auth.user_role(email, pass) into _role;
if _role is null then
-- HTTP/403
--raise invalid_password using message = 'invalid user or password';
-- HTTP/401
raise insufficient_privilege using message = 'invalid user or password';
end if;
-- Check if user is disabled due to abuse
SELECT preferences['disable'],user_id INTO _user_disable,_user_id
FROM auth.accounts a
WHERE a.email = _email;
IF _user_disable is True then
-- due to the raise, the insert is never committed.
--INSERT INTO process_queue (channel, payload, stored, ref_id)
-- VALUES ('account_disable', _email, now(), _user_id);
RAISE sqlstate 'PT402' using message = 'Account disabled, contact us',
detail = 'Quota exceeded',
hint = 'Upgrade your plan';
END IF;
-- Check email_valid and generate OTP
SELECT preferences['email_valid'],user_id INTO _email_valid,_user_id
FROM auth.accounts a
WHERE a.email = _email;
IF _email_valid is null or _email_valid is False THEN
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('email_otp', _email, now(), _user_id);
END IF;
-- Track IP per user to avoid abuse
--RAISE WARNING 'api.login debug: [%],[%]', client_ip, login.email;
IF client_ip IS NOT NULL THEN
UPDATE auth.accounts a SET preferences = jsonb_recursive_merge(a.preferences, jsonb_build_object('ip', client_ip)) WHERE a.email = login.email;
END IF;
-- Get app_jwt_secret
SELECT value INTO app_jwt_secret
FROM app_settings
WHERE name = 'app.jwt_secret';
--RAISE WARNING 'api.login debug: [%],[%],[%]', app_jwt_secret, _role, login.email;
-- Generate jwt
select jwt.sign(
-- row_to_json(r), ''
-- row_to_json(r)::json, current_setting('app.jwt_secret')::text
row_to_json(r)::json, app_jwt_secret
) as token
from (
select _role as role, login.email as email, -- TODO replace with user_id
-- select _role as role, user_id as uid, -- add support in check_jwt
extract(epoch from now())::integer + 60*60 as exp
) r
into result;
return result;
end;
$$ language plpgsql security definer;
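-- Example usage, a minimal sketch with hypothetical credentials (returns a signed JWT on success):
--SELECT api.login('demo@example.com', 'mysecretpassword');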
-- Add moorage name to view
DROP VIEW IF EXISTS api.moorages_stays_view;
CREATE OR REPLACE VIEW api.moorages_stays_view WITH (security_invoker=true,security_barrier=true) AS
select
_to.name AS _to_name,
_to.id AS _to_id,
_to._to_time,
_from.id AS _from_id,
_from.name AS _from_name,
_from._from_time,
s.stay_code,s.duration,m.id,m.name
FROM api.stays_at sa, api.moorages m, api.stays s
LEFT JOIN api.logbook AS _from ON _from._from_time = s.departed
LEFT JOIN api.logbook AS _to ON _to._to_time = s.arrived
WHERE s.departed IS NOT NULL
AND s.name IS NOT NULL
AND s.stay_code = sa.stay_code
AND s.moorage_id = m.id
ORDER BY _to._to_time DESC;
-- Description
COMMENT ON VIEW
api.moorages_stays_view
IS 'Moorages stay listing web view';
-- Create a merge_logbook_fn
CREATE OR REPLACE FUNCTION api.merge_logbook_fn(IN id_start integer, IN id_end integer) RETURNS void AS $merge_logbook$
DECLARE
logbook_rec_start record;
logbook_rec_end record;
log_name text;
avg_rec record;
geo_rec record;
geojson jsonb;
extra_json jsonb;
BEGIN
-- Ensure id_start and id_end are valid
IF (id_start IS NULL OR id_start < 1) OR (id_end IS NULL OR id_end < 1) THEN
RAISE WARNING '-> merge_logbook_fn invalid input % %', id_start, id_end;
RETURN;
END IF;
-- If id_end is lower than or equal to id_start
IF id_end <= id_start THEN
RAISE WARNING '-> merge_logbook_fn invalid input % < %', id_end, id_start;
RETURN;
END IF;
-- Get the start logbook record where all necessary fields exist
SELECT * INTO logbook_rec_start
FROM api.logbook
WHERE active IS false
AND id = id_start
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec_start.vessel_id IS NULL THEN
RAISE WARNING '-> merge_logbook_fn invalid logbook %', id_start;
RETURN;
END IF;
-- Get the end logbook record where all necessary fields exist
SELECT * INTO logbook_rec_end
FROM api.logbook
WHERE active IS false
AND id = id_end
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec_end.vessel_id IS NULL THEN
RAISE WARNING '-> merge_logbook_fn invalid logbook %', id_end;
RETURN;
END IF;
RAISE WARNING '-> merge_logbook_fn logbook start:% end:%', id_start, id_end;
PERFORM set_config('vessel.id', logbook_rec_start.vessel_id, false);
-- Calculate logbook data average and geo
-- Update logbook entry with the latest metric data and calculate data
avg_rec := logbook_update_avg_fn(logbook_rec_start.id, logbook_rec_start._from_time::TEXT, logbook_rec_end._to_time::TEXT);
geo_rec := logbook_update_geom_distance_fn(logbook_rec_start.id, logbook_rec_start._from_time::TEXT, logbook_rec_end._to_time::TEXT);
-- Process `propulsion.*.runTime` and `navigation.log`
-- Calculate extra json
extra_json := logbook_update_extra_json_fn(logbook_rec_start.id, logbook_rec_start._from_time::TEXT, logbook_rec_end._to_time::TEXT);
-- add the avg_wind_speed
extra_json := extra_json || jsonb_build_object('avg_wind_speed', avg_rec.avg_wind_speed);
-- generate logbook name, concat _from_location and _to_location from moorage name
SELECT CONCAT(logbook_rec_start._from, ' to ', logbook_rec_end._to) INTO log_name;
RAISE NOTICE 'Updating valid logbook entry logbook id:[%] start:[%] end:[%]', logbook_rec_start.id, logbook_rec_start._from_time, logbook_rec_end._to_time;
UPDATE api.logbook
SET
-- Update the start logbook with the newly calculated metrics
duration = (logbook_rec_end._to_time::TIMESTAMPTZ - logbook_rec_start._from_time::TIMESTAMPTZ),
avg_speed = avg_rec.avg_speed,
max_speed = avg_rec.max_speed,
max_wind_speed = avg_rec.max_wind_speed,
name = log_name,
track_geom = geo_rec._track_geom,
distance = geo_rec._track_distance,
extra = extra_json,
-- Set _to metrics from end logbook
_to = logbook_rec_end._to,
_to_moorage_id = logbook_rec_end._to_moorage_id,
_to_lat = logbook_rec_end._to_lat,
_to_lng = logbook_rec_end._to_lng,
_to_time = logbook_rec_end._to_time
WHERE id = logbook_rec_start.id;
-- GeoJSON requires the track_geom field
geojson := logbook_update_geojson_fn(logbook_rec_start.id, logbook_rec_start._from_time::TEXT, logbook_rec_end._to_time::TEXT);
UPDATE api.logbook
SET
track_geojson = geojson
WHERE id = logbook_rec_start.id;
-- Update logbook mark for deletion
UPDATE api.logbook
SET notes = 'mark for deletion'
WHERE id = logbook_rec_end.id;
-- Update related stays mark for deletion
UPDATE api.stays
SET notes = 'mark for deletion'
WHERE arrived = logbook_rec_start._to_time;
-- Update related moorages mark for deletion
UPDATE api.moorages
SET notes = 'mark for deletion'
WHERE id = logbook_rec_start._to_moorage_id;
-- Clean up, remove invalid logbook and stay, moorage entry
DELETE FROM api.logbook WHERE id = logbook_rec_end.id;
RAISE WARNING '-> merge_logbook_fn delete logbook id [%]', logbook_rec_end.id;
DELETE FROM api.stays WHERE arrived = logbook_rec_start._to_time;
RAISE WARNING '-> merge_logbook_fn delete stay arrived [%]', logbook_rec_start._to_time;
DELETE FROM api.moorages WHERE id = logbook_rec_start._to_moorage_id;
RAISE WARNING '-> merge_logbook_fn delete moorage id [%]', logbook_rec_start._to_moorage_id;
END;
$merge_logbook$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.merge_logbook_fn
IS 'Merge 2 logbooks by id, from the start of the lower log id to the end of the higher log id; update the calculated data as well (avg, geojson)';
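-- Example usage, a minimal sketch with hypothetical ids (merges log 2 into log 1, then deletes log 2):
--SELECT api.merge_logbook_fn(1, 2);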
-- Add tags to view
DROP VIEW IF EXISTS api.logs_view;
CREATE OR REPLACE VIEW api.logs_view
WITH(security_invoker=true,security_barrier=true)
AS SELECT id,
name,
_from AS "from",
_from_time AS started,
_to AS "to",
_to_time AS ended,
distance,
duration,
_from_moorage_id,
_to_moorage_id,
extra->'tags' AS tags
FROM api.logbook l
WHERE name IS NOT NULL AND _to_time IS NOT NULL
ORDER BY _from_time DESC;
-- Description
COMMENT ON VIEW api.logs_view IS 'Logs web view';
-- Update a logbook with avg wind speed
DROP FUNCTION IF EXISTS public.logbook_update_avg_fn;
CREATE OR REPLACE FUNCTION public.logbook_update_avg_fn(
IN _id integer,
IN _start TEXT,
IN _end TEXT,
OUT avg_speed double precision,
OUT max_speed double precision,
OUT max_wind_speed double precision,
OUT avg_wind_speed double precision,
OUT count_metric integer
) AS $logbook_update_avg$
BEGIN
RAISE NOTICE '-> logbook_update_avg_fn calculate avg for logbook id=%, start:"%", end:"%"', _id, _start, _end;
SELECT AVG(speedoverground), MAX(speedoverground), MAX(windspeedapparent), AVG(windspeedapparent), COUNT(*) INTO
avg_speed, max_speed, max_wind_speed, avg_wind_speed, count_metric
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND m.time >= _start::TIMESTAMPTZ
AND m.time <= _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false);
RAISE NOTICE '-> logbook_update_avg_fn avg for logbook id=%, avg_speed:%, max_speed:%, avg_wind_speed:%, max_wind_speed:%, count:%', _id, avg_speed, max_speed, avg_wind_speed, max_wind_speed, count_metric;
END;
$logbook_update_avg$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_update_avg_fn
IS 'Update logbook details with calculated average and max data, AVG(speedOverGround), MAX(speedOverGround), MAX(windspeedapparent), count_metric';
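-- Example usage, a minimal sketch with a hypothetical id and time range (requires vessel.id to be set):
--SELECT * FROM public.logbook_update_avg_fn(1, '2024-05-01T10:00:00Z', '2024-05-01T16:00:00Z');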
-- Update pending new logbook from process queue
DROP FUNCTION IF EXISTS process_logbook_queue_fn;
CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void AS $process_logbook_queue$
DECLARE
logbook_rec record;
from_name text;
to_name text;
log_name text;
from_moorage record;
to_moorage record;
avg_rec record;
geo_rec record;
log_settings jsonb;
user_settings jsonb;
geojson jsonb;
extra_json jsonb;
BEGIN
-- Ensure _id is valid
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> process_logbook_queue_fn invalid input %', _id;
RETURN;
END IF;
-- Get the logbook record where all necessary fields exist
SELECT * INTO logbook_rec
FROM api.logbook
WHERE active IS false
AND id = _id
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> process_logbook_queue_fn invalid logbook %', _id;
RETURN;
END IF;
PERFORM set_config('vessel.id', logbook_rec.vessel_id, false);
--RAISE WARNING 'public.process_logbook_queue_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Calculate logbook data average and geo
-- Update logbook entry with the latest metric data and calculate data
avg_rec := logbook_update_avg_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
geo_rec := logbook_update_geom_distance_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
-- Do we have an existing moorage within 300m of the new log
-- generate logbook name, concat _from_location and _to_location from moorage name
from_moorage := process_lat_lon_fn(logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
to_moorage := process_lat_lon_fn(logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC);
SELECT CONCAT(from_moorage.moorage_name, ' to ' , to_moorage.moorage_name) INTO log_name;
-- Process `propulsion.*.runTime` and `navigation.log`
-- Calculate extra json
extra_json := logbook_update_extra_json_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
-- add the avg_wind_speed
extra_json := extra_json || jsonb_build_object('avg_wind_speed', avg_rec.avg_wind_speed);
RAISE NOTICE 'Updating valid logbook entry logbook id:[%] start:[%] end:[%]', logbook_rec.id, logbook_rec._from_time, logbook_rec._to_time;
UPDATE api.logbook
SET
duration = (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ),
avg_speed = avg_rec.avg_speed,
max_speed = avg_rec.max_speed,
max_wind_speed = avg_rec.max_wind_speed,
_from = from_moorage.moorage_name,
_from_moorage_id = from_moorage.moorage_id,
_to_moorage_id = to_moorage.moorage_id,
_to = to_moorage.moorage_name,
name = log_name,
track_geom = geo_rec._track_geom,
distance = geo_rec._track_distance,
extra = extra_json,
notes = NULL -- reset pre_log process
WHERE id = logbook_rec.id;
-- GeoJSON requires the track_geom field (geometry LineString)
geojson := logbook_update_geojson_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
UPDATE api.logbook
SET
track_geojson = geojson
WHERE id = logbook_rec.id;
-- GeoJSON Timelapse require track_geojson geometry point
-- Add properties to the geojson for timelapse purpose
PERFORM public.logbook_timelapse_geojson_fn(logbook_rec.id);
-- Add post logbook entry to process queue for notification and QGIS processing
-- Require as we need the logbook to be updated with SQL commit
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('post_logbook', logbook_rec.id, NOW(), current_setting('vessel.id', true));
END;
$process_logbook_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_logbook_queue_fn
IS 'Update logbook details when completed, logbook_update_avg_fn, logbook_update_geom_distance_fn, reverse_geocode_py_fn';
-- Add avg_wind_speed to logbook geojson
-- Add back truewindspeed and truewinddirection to logbook geojson
DROP FUNCTION IF EXISTS public.logbook_update_geojson_fn;
CREATE FUNCTION public.logbook_update_geojson_fn(IN _id integer, IN _start text, IN _end text,
OUT _track_geojson JSON
) AS $logbook_geojson$
declare
log_geojson jsonb;
metrics_geojson jsonb;
_map jsonb;
begin
-- GeoJson Feature Logbook linestring
SELECT
ST_AsGeoJSON(log.*) into log_geojson
FROM
( SELECT
id,name,
distance,
duration,
avg_speed,
max_speed,
max_wind_speed,
_from_time,
_to_time,
_from_moorage_id,
_to_moorage_id,
notes,
extra['avg_wind_speed'] as avg_wind_speed,
track_geom
FROM api.logbook
WHERE id = _id
) AS log;
-- GeoJson Feature Metrics point
SELECT
json_agg(ST_AsGeoJSON(t.*)::json) into metrics_geojson
FROM (
( SELECT
time,
courseovergroundtrue,
speedoverground,
windspeedapparent,
longitude,latitude,
'' AS notes,
coalesce(metersToKnots((metrics->'environment.wind.speedTrue')::NUMERIC), null) as truewindspeed,
coalesce(radiantToDegrees((metrics->'environment.wind.directionTrue')::NUMERIC), null) as truewinddirection,
coalesce(status, null) as status,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND time >= _start::TIMESTAMPTZ
AND time <= _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
ORDER BY m.time ASC
)
) AS t;
-- Merge jsonb
SELECT log_geojson::jsonb || metrics_geojson::jsonb into _map;
-- output
SELECT
json_build_object(
'type', 'FeatureCollection',
'features', _map
) into _track_geojson;
END;
$logbook_geojson$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_update_geojson_fn
IS 'Update log details with geojson';
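-- Example usage, a minimal sketch with a hypothetical id and time range (requires vessel.id to be set):
--SELECT public.logbook_update_geojson_fn(1, '2024-05-01T10:00:00Z', '2024-05-01T16:00:00Z');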
-- Add properties to the geojson for timelapse purpose
DROP FUNCTION IF EXISTS public.logbook_timelapse_geojson_fn;
CREATE FUNCTION public.logbook_timelapse_geojson_fn(IN _id INT) returns void
AS $logbook_timelapse$
declare
first_feature_note JSONB;
second_feature_note JSONB;
last_feature_note JSONB;
logbook_rec record;
begin
-- We need to fetch the processed logbook data.
SELECT name,duration,distance,_from,_to INTO logbook_rec
FROM api.logbook
WHERE active IS false
AND id = _id
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
--raise warning '-> logbook_rec: %', logbook_rec;
select format('{"trip": { "name": "%s", "duration": "%s", "distance": "%s" }}', logbook_rec.name, logbook_rec.duration, logbook_rec.distance) into first_feature_note;
select format('{"notes": "%s"}', logbook_rec._from) into second_feature_note;
select format('{"notes": "%s"}', logbook_rec._to) into last_feature_note;
--raise warning '-> logbook_rec: % % %', first_feature_note, second_feature_note, last_feature_note;
-- Update the properties of the second feature (the first geometry Point)
UPDATE api.logbook
SET track_geojson = jsonb_set(
track_geojson,
'{features, 1, properties}',
(track_geojson -> 'features' -> 1 -> 'properties' || first_feature_note)::jsonb
)
WHERE id = _id
and track_geojson -> 'features' -> 1 -> 'geometry' ->> 'type' = 'Point';
-- Update the properties of the third feature (the second geometry Point)
UPDATE api.logbook
SET track_geojson = jsonb_set(
track_geojson,
'{features, 2, properties}',
(track_geojson -> 'features' -> 2 -> 'properties' || second_feature_note)::jsonb
)
where id = _id
and track_geojson -> 'features' -> 2 -> 'geometry' ->> 'type' = 'Point';
-- Update the properties of the last feature with geometry point
UPDATE api.logbook
SET track_geojson = jsonb_set(
track_geojson,
'{features, -1, properties}',
CASE
WHEN COALESCE((track_geojson -> 'features' -> -1 -> 'properties' ->> 'notes'), '') = '' THEN
(track_geojson -> 'features' -> -1 -> 'properties' || last_feature_note)::jsonb
ELSE
track_geojson -> 'features' -> -1 -> 'properties'
END
)
WHERE id = _id
and track_geojson -> 'features' -> -1 -> 'geometry' ->> 'type' = 'Point';
end;
$logbook_timelapse$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_timelapse_geojson_fn
IS 'Update logbook geojson, Add properties to some geojson features for timelapse purpose';
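-- Example usage, a minimal sketch with a hypothetical logbook id (expects track_geojson to be populated first):
--SELECT public.logbook_timelapse_geojson_fn(1);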
-- CRON for signalk plugin upgrade
-- The goal is to avoid errors from old plugin versions by enforcing an upgrade.
-- ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
-- "POST /metadata?on_conflict=client_id HTTP/1.1" 400 137 "-" "postgsail.signalk v0.0.9"
DROP FUNCTION IF EXISTS public.cron_process_skplugin_upgrade_fn;
CREATE FUNCTION public.cron_process_skplugin_upgrade_fn() RETURNS void AS $skplugin_upgrade$
DECLARE
skplugin_upgrade_rec record;
user_settings jsonb;
BEGIN
-- Check for signalk plugin version
RAISE NOTICE 'cron_process_skplugin_upgrade_fn';
FOR skplugin_upgrade_rec in
SELECT
v.owner_email,m.name,m.vessel_id,m.plugin_version,a.first
FROM api.metadata m
LEFT JOIN auth.vessels v ON v.vessel_id = m.vessel_id
LEFT JOIN auth.accounts a ON v.owner_email = a.email
WHERE m.plugin_version <= '0.3.0'
LOOP
RAISE NOTICE '-> cron_process_skplugin_upgrade_rec_fn for [%]', skplugin_upgrade_rec;
SELECT json_build_object('email', skplugin_upgrade_rec.owner_email, 'recipient', skplugin_upgrade_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_skplugin_upgrade_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('skplugin_upgrade'::TEXT, user_settings::JSONB);
END LOOP;
END;
$skplugin_upgrade$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_skplugin_upgrade_fn
IS 'init by pg_cron, check for signalk plugin version and notify for upgrade';
INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
VALUES ('skplugin_upgrade','PostgSail Signalk plugin upgrade',E'Hello __RECIPIENT__,\nPlease upgrade your postgsail signalk plugin. Be sure to contact me if you encounter any issue.','PostgSail Signalk plugin upgrade!',E'Please upgrade your postgsail signalk plugin.');
DROP FUNCTION IF EXISTS public.metadata_ip_trigger_fn;
-- Track IP per vessel to avoid abuse
CREATE FUNCTION public.metadata_ip_trigger_fn() RETURNS trigger
AS $metadata_ip_trigger$
DECLARE
headers json := current_setting('request.headers', true)::json;
client_ip text := coalesce(headers->>'x-client-ip', NULL);
BEGIN
RAISE WARNING 'metadata_ip_trigger_fn [%] [%]', current_setting('vessel.id', true), client_ip;
IF client_ip IS NOT NULL THEN
UPDATE api.metadata
SET
configuration = NEW.configuration || jsonb_build_object('ip', client_ip)
WHERE id = NEW.id;
END IF;
RETURN NULL;
END;
$metadata_ip_trigger$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION public.metadata_ip_trigger_fn() IS 'Add IP from vessel in metadata, track abuse';
DROP TRIGGER IF EXISTS metadata_ip_trigger ON api.metadata;
-- Disabled: creating this trigger generates an error
--CREATE TRIGGER metadata_ip_trigger BEFORE UPDATE ON api.metadata
-- FOR EACH ROW EXECUTE FUNCTION metadata_ip_trigger_fn();
-- Description
--COMMENT ON TRIGGER
-- metadata_ip_trigger ON api.metadata
-- IS 'AFTER UPDATE ON api.metadata run function metadata_ip_trigger_fn for tracking vessel IP';
DROP FUNCTION IF EXISTS public.logbook_active_geojson_fn;
CREATE FUNCTION public.logbook_active_geojson_fn(
OUT _track_geojson jsonb
) AS $logbook_active_geojson$
BEGIN
WITH log_active AS (
SELECT * FROM api.logbook WHERE active IS True
),
log_gis_line AS (
SELECT ST_MakeLine(
ARRAY(
SELECT st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m, log_active l
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND m.time >= l._from_time::TIMESTAMPTZ
AND m.time <= l._to_time::TIMESTAMPTZ
ORDER BY m.time ASC
)
)
),
log_gis_point AS (
SELECT
ST_AsGeoJSON(t.*)::json AS GeoJSONPoint
FROM (
( SELECT
time,
courseovergroundtrue,
speedoverground,
windspeedapparent,
longitude,latitude,
'' AS notes,
coalesce(metersToKnots((metrics->'environment.wind.speedTrue')::NUMERIC), null) as truewindspeed,
coalesce(radiantToDegrees((metrics->'environment.wind.directionTrue')::NUMERIC), null) as truewinddirection,
coalesce(status, null) AS status,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
ORDER BY m.time DESC LIMIT 1
)
) as t
),
log_agg as (
SELECT
CASE WHEN log_gis_line.st_makeline IS NOT NULL THEN
( SELECT jsonb_agg(ST_AsGeoJSON(log_gis_line.*)::json)::jsonb AS GeoJSONLine FROM log_gis_line )
ELSE
( SELECT '[]'::json AS GeoJSONLine )::jsonb
END
FROM log_gis_line
)
SELECT
jsonb_build_object(
'type', 'FeatureCollection',
'features', log_agg.GeoJSONLine::jsonb || log_gis_point.GeoJSONPoint::jsonb
) INTO _track_geojson FROM log_agg, log_gis_point;
END;
$logbook_active_geojson$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_active_geojson_fn
IS 'Create a GeoJSON with 2 features, LineString with a current active log and Point with the last position';
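-- Example usage, a minimal sketch (requires an active logbook and vessel.id to be set):
--SELECT public.logbook_active_geojson_fn();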
-- Update monitoring view to support live trip and truewindspeed and truewinddirection to stationary GeoJSON.
DROP VIEW IF EXISTS api.monitoring_view;
CREATE VIEW api.monitoring_view WITH (security_invoker=true,security_barrier=true) AS
SELECT
time AS "time",
(NOW() AT TIME ZONE 'UTC' - time) > INTERVAL '70 MINUTES' as offline,
metrics-> 'environment.water.temperature' AS waterTemperature,
metrics-> 'environment.inside.temperature' AS insideTemperature,
metrics-> 'environment.outside.temperature' AS outsideTemperature,
metrics-> 'environment.wind.speedOverGround' AS windSpeedOverGround,
metrics-> 'environment.wind.directionTrue' AS windDirectionTrue,
metrics-> 'environment.inside.relativeHumidity' AS insideHumidity,
metrics-> 'environment.outside.relativeHumidity' AS outsideHumidity,
metrics-> 'environment.outside.pressure' AS outsidePressure,
metrics-> 'environment.inside.pressure' AS insidePressure,
metrics-> 'electrical.batteries.House.capacity.stateOfCharge' AS batteryCharge,
metrics-> 'electrical.batteries.House.voltage' AS batteryVoltage,
metrics-> 'environment.depth.belowTransducer' AS depth,
jsonb_build_object(
'type', 'Feature',
'geometry', ST_AsGeoJSON(st_makepoint(longitude,latitude))::jsonb,
'properties', jsonb_build_object(
'name', current_setting('vessel.name', false),
'latitude', m.latitude,
'longitude', m.longitude,
'time', m.time,
'speedoverground', m.speedoverground,
'windspeedapparent', m.windspeedapparent,
'truewindspeed', coalesce(metersToKnots((metrics->'environment.wind.speedTrue')::NUMERIC), null),
'truewinddirection', coalesce(radiantToDegrees((metrics->'environment.wind.directionTrue')::NUMERIC), null),
'status', coalesce(m.status, null)
)::jsonb ) AS geojson,
current_setting('vessel.name', false) AS name,
m.status,
CASE WHEN m.status <> 'moored' THEN (
SELECT public.logbook_active_geojson_fn() )
END AS live
FROM api.metrics m
ORDER BY time DESC LIMIT 1;
-- Description
COMMENT ON VIEW
api.monitoring_view
IS 'Monitoring static web view';
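-- Example usage, a minimal sketch (requires vessel.id and vessel.name to be set, e.g. via check_jwt):
--SELECT time, offline, geojson, live FROM api.monitoring_view;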
-- Allow to access tables for user_role and grafana and api_anonymous
GRANT SELECT ON ALL TABLES IN SCHEMA api TO user_role;
GRANT SELECT ON ALL TABLES IN SCHEMA api TO grafana;
GRANT SELECT ON TABLE api.monitoring_view TO user_role;
GRANT SELECT ON TABLE api.monitoring_view TO api_anonymous;
GRANT SELECT ON TABLE api.monitoring_view TO grafana;
-- Allow to execute fn for user_role and grafana and api_anonymous
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO grafana;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO grafana;
GRANT EXECUTE ON FUNCTION public.logbook_active_geojson_fn TO api_anonymous;
GRANT EXECUTE ON FUNCTION public.metersToKnots TO api_anonymous;
GRANT EXECUTE ON FUNCTION public.radiantToDegrees TO api_anonymous;
-- Fix vessel name (Organization): ensure we have a value either from the metadata table (signalk) or from the vessels table
DROP FUNCTION IF EXISTS public.cron_process_grafana_fn;
CREATE OR REPLACE FUNCTION public.cron_process_grafana_fn() RETURNS void
AS $cron_process_grafana_fn$
DECLARE
process_rec record;
data_rec record;
app_settings jsonb;
user_settings jsonb;
BEGIN
-- We run grafana provisioning only after the first received vessel metadata
-- Check for new vessel metadata pending grafana provisioning
RAISE NOTICE 'cron_process_grafana_fn';
FOR process_rec in
SELECT * from process_queue
where channel = 'grafana' and processed is null
order by stored asc
LOOP
RAISE NOTICE '-> cron_process_grafana_fn [%]', process_rec.payload;
-- Gather url from app settings
app_settings := get_app_settings_fn();
-- Get vessel details based on metadata id
SELECT
v.owner_email,coalesce(m.name,v.name) as name,m.vessel_id into data_rec
FROM auth.vessels v
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.id = process_rec.payload::INTEGER;
IF data_rec.vessel_id IS NULL OR data_rec.name IS NULL THEN
RAISE WARNING '-> DEBUG cron_process_grafana_fn grafana_py_fn error [%]', data_rec;
RETURN;
END IF;
-- As we got data from the vessel, we can do the grafana provisioning.
RAISE DEBUG '-> DEBUG cron_process_grafana_fn grafana_py_fn provisioning [%]', data_rec;
PERFORM grafana_py_fn(data_rec.name, data_rec.vessel_id, data_rec.owner_email, app_settings);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(data_rec.vessel_id::TEXT);
RAISE DEBUG '-> DEBUG cron_process_grafana_fn get_user_settings_from_vesselid_fn [%]', user_settings;
-- add user in keycloak
PERFORM keycloak_auth_py_fn(data_rec.vessel_id, user_settings, app_settings);
-- Send notification
PERFORM send_notification_fn('grafana'::TEXT, user_settings::JSONB);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> cron_process_grafana_fn updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$cron_process_grafana_fn$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_grafana_fn
IS 'init by pg_cron to check for new vessel pending grafana provisioning, if so perform grafana_py_fn';
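-- A possible pg_cron schedule, assuming a 1-minute interval (an assumption; the actual schedule is defined elsewhere):
--SELECT cron.schedule('cron_grafana', '*/1 * * * *', 'select public.cron_process_grafana_fn()');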
-- Update version
UPDATE public.app_settings
SET value='0.7.3'
WHERE "name"='app.version';
\c postgres
-- Notifications/Reminders for old signalk plugin
-- At 08:06 on every 4th day-of-month and on every Sunday (cron ORs the restricted day-of-month and day-of-week fields).
SELECT cron.schedule('cron_skplugin_upgrade', '6 8 */4 * 0', 'select public.cron_process_skplugin_upgrade_fn()');
UPDATE cron.job SET database = 'postgres' WHERE jobname = 'cron_skplugin_upgrade';

---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration June 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Add video timelapse notification message
INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
VALUES ('video_ready','PostgSail Video ready',E'Hey,\nYour video is available at __VIDEO_LINK__.\nPlease make sure you download your video as it will be deleted in 7 days.','PostgSail Video ready!',E'Your video is ready __VIDEO_LINK__.');
-- Generate and request the logbook image url to be cache on QGIS server.
DROP FUNCTION IF EXISTS public.qgis_getmap_py_fn;
CREATE OR REPLACE FUNCTION public.qgis_getmap_py_fn(IN vessel_id TEXT DEFAULT NULL, IN log_id NUMERIC DEFAULT NULL, IN extent TEXT DEFAULT NULL, IN logs_url BOOLEAN DEFAULT False) RETURNS VOID
AS $qgis_getmap_py$
import requests
# Extract extent
def parse_extent_from_db(extent_raw):
# Parse the extent_raw to extract coordinates
extent = extent_raw.replace('BOX(', '').replace(')', '').split(',')
min_x, min_y = map(float, extent[0].split())
max_x, max_y = map(float, extent[1].split())
return min_x, min_y, max_x, max_y
# ZoomOut from linestring extent
def apply_scale_factor(extent, scale_factor=1.125):
min_x, min_y, max_x, max_y = extent
center_x = (min_x + max_x) / 2
center_y = (min_y + max_y) / 2
width = max_x - min_x
height = max_y - min_y
new_width = width * scale_factor
new_height = height * scale_factor
scaled_extent = (
round(center_x - new_width / 2),
round(center_y - new_height / 2),
round(center_x + new_width / 2),
round(center_y + new_height / 2),
)
return scaled_extent
def adjust_image_size_to_bbox(extent, width, height):
min_x, min_y, max_x, max_y = extent
bbox_aspect_ratio = (max_x - min_x) / (max_y - min_y)
image_aspect_ratio = width / height
if bbox_aspect_ratio > image_aspect_ratio:
# Adjust height to match aspect ratio
height = width / bbox_aspect_ratio
else:
# Adjust width to match aspect ratio
width = height * bbox_aspect_ratio
return int(width), int(height)
def calculate_width(extent, fixed_height):
min_x, min_y, max_x, max_y = extent
bbox_aspect_ratio = (max_x - min_x) / (max_y - min_y)
width = fixed_height * bbox_aspect_ratio
return int(width)
def adjust_bbox_to_fixed_size(scaled_extent, fixed_width, fixed_height):
min_x, min_y, max_x, max_y = scaled_extent
bbox_width = max_x - min_x
bbox_height = max_y - min_y
bbox_aspect_ratio = bbox_width / bbox_height
image_aspect_ratio = fixed_width / fixed_height
if bbox_aspect_ratio > image_aspect_ratio:
# Adjust height to match aspect ratio
new_bbox_height = bbox_width / image_aspect_ratio
height_diff = new_bbox_height - bbox_height
min_y -= height_diff / 2
max_y += height_diff / 2
else:
# Adjust width to match aspect ratio
new_bbox_width = bbox_height * image_aspect_ratio
width_diff = new_bbox_width - bbox_width
min_x -= width_diff / 2
max_x += width_diff / 2
adjusted_extent = (min_x, min_y, max_x, max_y)
return adjusted_extent
def generate_getmap_url(server_url, project_path, layer_name, extent, width=1080, height=566, crs="EPSG:3857", format="image/png"):
min_x, min_y, max_x, max_y = extent
bbox = f"{min_x},{min_y},{max_x},{max_y}"
# Adjust image size to match BBOX aspect ratio
#width, height = adjust_image_size_to_bbox(extent, width, height)
# Calculate width to maintain aspect ratio with fixed height
#width = calculate_width(extent, height)
url = (
f"{server_url}?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&FORMAT={format}&CRS={crs}"
f"&BBOX={bbox}&WIDTH={width}&HEIGHT={height}&LAYERS={layer_name}&MAP={project_path}"
)
return url
if logs_url == False:
server_url = f"https://gis.openplotter.cloud/log_{vessel_id}_{log_id}.png"
else:
server_url = f"https://gis.openplotter.cloud/logs_{vessel_id}_{log_id}.png"
project_path = "/projects/postgsail5.qgz"
layer_name = "OpenStreetMap,SQLLayer"
#plpy.notice('qgis_getmap_py vessel_id [{}], log_id [{}], extent [{}]'.format(vessel_id, log_id, extent))
# Parse extent and scale factor
scaled_extent = apply_scale_factor(parse_extent_from_db(extent))
#plpy.notice('qgis_getmap_py scaled_extent [{}]'.format(scaled_extent))
fixed_width = 1080
fixed_height = 566
adjusted_extent = adjust_bbox_to_fixed_size(scaled_extent, fixed_width, fixed_height)
#plpy.notice('qgis_getmap_py adjusted_extent [{}]'.format(adjusted_extent))
getmap_url = generate_getmap_url(server_url, project_path, layer_name, adjusted_extent)
if logs_url == False:
filter_url = f"{getmap_url}&FILTER=SQLLayer:\"vessel_id\" = '{vessel_id}' AND \"id\" = {log_id}"
else:
filter_url = f"{getmap_url}&FILTER=SQLLayer:\"vessel_id\" = '{vessel_id}'"
#plpy.notice('qgis_getmap_py getmap_url [{}]'.format(filter_url))
# Fetch image to be cache in qgis server
headers = {"User-Agent": "PostgSail", "From": "xbgmsharp@gmail.com"}
r = requests.get(filter_url, headers=headers, timeout=100)
# Parse response
if r.status_code != 200:
plpy.warning('Failed to get WMS image, url[{}]'.format(filter_url))
$qgis_getmap_py$ LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.qgis_getmap_py_fn
IS 'Generate a log map, to generate the cache data for faster access later';
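-- Example usage, a minimal sketch with a hypothetical vessel_id, log id and extent (BBOX in EPSG:3857):
--SELECT public.qgis_getmap_py_fn('some_vessel_id', 1, 'BOX(273950 6050000,274550 6050800)', False);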
-- Generate the logbook extent for the logbook image to access the QGIS server.
DROP FUNCTION IF EXISTS public.qgis_bbox_py_fn;
CREATE OR REPLACE FUNCTION public.qgis_bbox_py_fn(IN vessel_id TEXT DEFAULT NULL, IN log_id NUMERIC DEFAULT NULL, IN width NUMERIC DEFAULT 1080, IN height NUMERIC DEFAULT 566, IN scaleout BOOLEAN DEFAULT True, OUT bbox TEXT)
AS $qgis_bbox_py$
log_extent = None
# If we have a vessel_id then it is logs image map
if vessel_id:
# Use the shared cache to avoid preparing the log extent
if vessel_id in SD:
plan = SD[vessel_id]
# A prepared statement from Python
else:
plan = plpy.prepare("WITH merged AS ( SELECT ST_Union(track_geom) AS merged_geometry FROM api.logbook WHERE vessel_id = $1 ) SELECT ST_Extent(ST_Transform(merged_geometry, 3857))::TEXT FROM merged;", ["text"])
SD[vessel_id] = plan
# Execute the statement with the log extent param and limit to 1 result
rv = plpy.execute(plan, [vessel_id], 1)
log_extent = rv[0]['st_extent']
# Else we have a log_id then it is single log image map
else:
# Use the shared cache to avoid preparing the log extent
if log_id in SD:
plan = SD[log_id]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT ST_Extent(ST_Transform(track_geom, 3857)) FROM api.logbook WHERE id = $1::NUMERIC", ["text"])
SD[log_id] = plan
# Execute the statement with the log extent param and limit to 1 result
rv = plpy.execute(plan, [log_id], 1)
log_extent = rv[0]['st_extent']
# Extract extent
def parse_extent_from_db(extent_raw):
# Parse the extent_raw to extract coordinates
extent = extent_raw.replace('BOX(', '').replace(')', '').split(',')
min_x, min_y = map(float, extent[0].split())
max_x, max_y = map(float, extent[1].split())
return min_x, min_y, max_x, max_y
# ZoomOut from linestring extent
def apply_scale_factor(extent, scale_factor=1.125):
min_x, min_y, max_x, max_y = extent
center_x = (min_x + max_x) / 2
center_y = (min_y + max_y) / 2
width = max_x - min_x
height = max_y - min_y
new_width = width * scale_factor
new_height = height * scale_factor
scaled_extent = (
round(center_x - new_width / 2),
round(center_y - new_height / 2),
round(center_x + new_width / 2),
round(center_y + new_height / 2),
)
return scaled_extent
def adjust_bbox_to_fixed_size(scaled_extent, fixed_width, fixed_height):
min_x, min_y, max_x, max_y = scaled_extent
bbox_width = float(max_x - min_x)
bbox_height = float(max_y - min_y)
bbox_aspect_ratio = float(bbox_width / bbox_height)
image_aspect_ratio = float(fixed_width / fixed_height)
if bbox_aspect_ratio > image_aspect_ratio:
# Adjust height to match aspect ratio
new_bbox_height = bbox_width / image_aspect_ratio
height_diff = new_bbox_height - bbox_height
min_y -= height_diff / 2
max_y += height_diff / 2
else:
# Adjust width to match aspect ratio
new_bbox_width = bbox_height * image_aspect_ratio
width_diff = new_bbox_width - bbox_width
min_x -= width_diff / 2
max_x += width_diff / 2
adjusted_extent = (min_x, min_y, max_x, max_y)
return adjusted_extent
if not log_extent:
plpy.warning('Failed to get sql qgis_bbox_py log_id [{}], extent [{}]'.format(log_id, log_extent))
return None
#plpy.notice('qgis_bbox_py log_id [{}], extent [{}]'.format(log_id, log_extent))
# Parse extent and apply ZoomOut scale factor
if scaleout:
scaled_extent = apply_scale_factor(parse_extent_from_db(log_extent))
else:
scaled_extent = parse_extent_from_db(log_extent)
#plpy.notice('qgis_bbox_py log_id [{}], scaled_extent [{}]'.format(log_id, scaled_extent))
fixed_width = width # default 1080
fixed_height = height # default 566
adjusted_extent = adjust_bbox_to_fixed_size(scaled_extent, fixed_width, fixed_height)
#plpy.notice('qgis_bbox_py log_id [{}], adjusted_extent [{}]'.format(log_id, adjusted_extent))
min_x, min_y, max_x, max_y = adjusted_extent
return f"{min_x},{min_y},{max_x},{max_y}"
$qgis_bbox_py$ LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.qgis_bbox_py_fn
IS 'Generate the BBOX base on log extent and adapt extent to the image size for QGIS Server';
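-- Example usage, a minimal sketch with hypothetical ids (single log extent when vessel_id is NULL, merged extent of all logs otherwise):
--SELECT public.qgis_bbox_py_fn(NULL, 1);
--SELECT public.qgis_bbox_py_fn('some_vessel_id', NULL);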
-- qgis_role user and role with login, read-only on api.logbook, limit 20 connections
CREATE ROLE qgis_role WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 20 LOGIN PASSWORD 'mysecretpassword';
COMMENT ON ROLE qgis_role IS
'Role used by QGIS server and Apache to connect and look up the logbook table.';
-- Allow read on VIEWS on API schema
GRANT USAGE ON SCHEMA api TO qgis_role;
GRANT SELECT ON TABLE api.logbook TO qgis_role;
GRANT USAGE ON SCHEMA public TO qgis_role;
-- For all postgis fn, st_extent, st_transform
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO qgis_role;
-- Allow qgis_role to select all logbook records
CREATE POLICY logbook_qgis_role ON api.logbook TO qgis_role
USING (true)
WITH CHECK (false);
-- Add support for HTML email with image inline for logbook
-- Add support for video link for maplapse
DROP FUNCTION IF EXISTS public.send_email_py_fn;
CREATE OR REPLACE FUNCTION public.send_email_py_fn(IN email_type TEXT, IN _user JSONB, IN app JSONB) RETURNS void
AS $send_email_py$
# Import smtplib for the actual sending function
import smtplib
import requests
# Import the email modules we need
from email.message import EmailMessage
from email.utils import formatdate,make_msgid
from email.mime.text import MIMEText
# Use the shared cache to avoid preparing the email metadata
if email_type in SD:
plan = SD[email_type]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT * FROM email_templates WHERE name = $1", ["text"])
SD[email_type] = plan
# Execute the statement with the email_type param and limit to 1 result
rv = plpy.execute(plan, [email_type], 1)
email_subject = rv[0]['email_subject']
email_content = rv[0]['email_content']
# Replace fields using input jsonb obj
if not _user or not app:
plpy.notice('send_email_py_fn Parameters [{}] [{}]'.format(_user, app))
plpy.error('Error missing parameters')
return None
if 'logbook_name' in _user and _user['logbook_name']:
email_content = email_content.replace('__LOGBOOK_NAME__', _user['logbook_name'])
if 'logbook_link' in _user and _user['logbook_link']:
email_content = email_content.replace('__LOGBOOK_LINK__', str(_user['logbook_link']))
if 'logbook_img' in _user and _user['logbook_img']:
email_content = email_content.replace('__LOGBOOK_IMG__', str(_user['logbook_img']))
if 'video_link' in _user and _user['video_link']:
email_content = email_content.replace('__VIDEO_LINK__', str( _user['video_link']))
if 'recipient' in _user and _user['recipient']:
email_content = email_content.replace('__RECIPIENT__', _user['recipient'])
if 'boat' in _user and _user['boat']:
email_content = email_content.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
email_content = email_content.replace('__BADGE_NAME__', _user['badge'])
if 'otp_code' in _user and _user['otp_code']:
email_content = email_content.replace('__OTP_CODE__', _user['otp_code'])
if 'reset_qs' in _user and _user['reset_qs']:
email_content = email_content.replace('__RESET_QS__', _user['reset_qs'])
if 'alert' in _user and _user['alert']:
email_content = email_content.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
email_content = email_content.replace('__APP_URL__', app['app.url'])
email_from = 'root@localhost'
if 'app.email_from' in app and app['app.email_from']:
email_from = 'PostgSail <' + app['app.email_from'] + '>'
#plpy.notice('Sending email from [{}] [{}]'.format(email_from, app['app.email_from']))
email_to = 'root@localhost'
if 'email' in _user and _user['email']:
email_to = _user['email']
#plpy.notice('Sending email to [{}] [{}]'.format(email_to, _user['email']))
else:
plpy.error('Error email to')
return None
if email_type == 'logbook':
msg = EmailMessage()
msg.set_content(email_content)
else:
msg = MIMEText(email_content, 'plain', 'utf-8')
msg["Subject"] = email_subject
msg["From"] = email_from
msg["To"] = email_to
msg["Date"] = formatdate()
msg["Message-ID"] = make_msgid()
if email_type == 'logbook' and 'logbook_img' in _user and _user['logbook_img']:
# Create a Content-ID for the image
image_cid = make_msgid()
# Set an alternative html body
msg.add_alternative("""\
<html>
<body>
<p>{email_content}</p>
<img src="cid:{image_cid}">
</body>
</html>
""".format(email_content=email_content, image_cid=image_cid[1:-1]), subtype='html')
img_url = 'https://gis.openplotter.cloud/{}'.format(str(_user['logbook_img']))
response = requests.get(img_url, stream=True)
if response.status_code == 200:
msg.get_payload()[1].add_related(response.raw.data,
maintype='image',
subtype='png',
cid=image_cid)
server_smtp = 'localhost'
if 'app.email_server' in app and app['app.email_server']:
server_smtp = app['app.email_server']
#plpy.notice('Sending server [{}] [{}]'.format(server_smtp, app['app.email_server']))
# Send the message via our own SMTP server.
try:
# send your message with credentials specified above
with smtplib.SMTP(server_smtp, 587) as server:
if 'app.email_user' in app and app['app.email_user'] \
and 'app.email_pass' in app and app['app.email_pass']:
server.starttls()
server.login(app['app.email_user'], app['app.email_pass'])
#server.send_message(msg)
server.sendmail(msg["From"], msg["To"], msg.as_string())
server.quit()
# report whether the message was sent
plpy.notice('Sent email successfully to [{}] [{}]'.format(msg["To"], msg["Subject"]))
return None
except OSError as error:
plpy.error('OS Error occurred: ' + str(error))
except smtplib.SMTPConnectError:
plpy.error('Failed to connect to the server. Bad connection settings?')
except smtplib.SMTPServerDisconnected:
plpy.error('Failed to connect to the server. Wrong user/password?')
except smtplib.SMTPException as e:
plpy.error('SMTP error occurred: ' + str(e))
$send_email_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.send_email_py_fn
IS 'Send email notification using plpython3u';
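-- Example usage (a sketch; all JSONB payload values below are hypothetical):
--   SELECT public.send_email_py_fn('logbook',
--       '{"email": "user@example.com", "recipient": "Alice", "boat": "Bromera",
--         "logbook_name": "Sunday sail", "logbook_link": 8430, "logbook_img": "log_x_8430.png"}'::jsonb,
--       '{"app.url": "https://example.com", "app.email_from": "no-reply@example.com",
--         "app.email_server": "localhost"}'::jsonb);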
-- Add vessel_id key, expose vessel_id
DROP FUNCTION IF EXISTS api.vessel_fn;
CREATE OR REPLACE FUNCTION api.vessel_fn(OUT vessel JSON) RETURNS JSON
AS $vessel$
DECLARE
BEGIN
SELECT
jsonb_build_object(
'name', coalesce(m.name, null),
'mmsi', coalesce(m.mmsi, null),
'vessel_id', m.vessel_id,
'created_at', v.created_at,
'first_contact', coalesce(m.created_at, null),
'last_contact', coalesce(m.time, null),
'geojson', coalesce(ST_AsGeoJSON(geojson_t.*)::json, null)
)::jsonb || api.vessel_details_fn()::jsonb
INTO vessel
FROM auth.vessels v, api.metadata m,
( select
current_setting('vessel.name') as name,
time,
courseovergroundtrue,
speedoverground,
anglespeedapparent,
longitude,latitude,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics
WHERE
latitude IS NOT NULL
AND longitude IS NOT NULL
AND vessel_id = current_setting('vessel.id', false)
ORDER BY time DESC LIMIT 1
) AS geojson_t
WHERE
m.vessel_id = current_setting('vessel.id')
AND m.vessel_id = v.vessel_id;
--RAISE notice 'api.vessel_fn %', obj;
END;
$vessel$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
api.vessel_fn
IS 'Expose vessel details to API';
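-- Example usage (assumes vessel.id and vessel.name are set for the session,
-- as done for authenticated vessel requests):
--   SELECT api.vessel_fn();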
-- Update pending new logbook from process queue
DROP FUNCTION IF EXISTS public.process_post_logbook_fn;
CREATE OR REPLACE FUNCTION public.process_post_logbook_fn(IN _id integer) RETURNS void AS $process_post_logbook_queue$
DECLARE
logbook_rec record;
log_settings jsonb;
user_settings jsonb;
extra_json jsonb;
log_img_url text;
logs_img_url text;
extent_bbox text;
BEGIN
-- If _id is not NULL
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> process_post_logbook_fn invalid input %', _id;
RETURN;
END IF;
-- Get the logbook record, ensuring all necessary fields exist
SELECT * INTO logbook_rec
FROM api.logbook
WHERE active IS false
AND id = _id
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> process_post_logbook_fn invalid logbook %', _id;
RETURN;
END IF;
PERFORM set_config('vessel.id', logbook_rec.vessel_id, false);
--RAISE WARNING 'public.process_post_logbook_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Generate logbook image map name from QGIS
SELECT CONCAT('log_', logbook_rec.vessel_id::TEXT, '_', logbook_rec.id, '.png') INTO log_img_url;
SELECT ST_Extent(ST_Transform(logbook_rec.track_geom, 3857))::TEXT AS envelope INTO extent_bbox FROM api.logbook WHERE id = logbook_rec.id;
PERFORM public.qgis_getmap_py_fn(logbook_rec.vessel_id::TEXT, logbook_rec.id, extent_bbox::TEXT, False);
-- Generate logs image map name from QGIS
WITH merged AS (
SELECT ST_Union(logbook_rec.track_geom) AS merged_geometry
FROM api.logbook WHERE vessel_id = logbook_rec.vessel_id
)
SELECT ST_Extent(ST_Transform(merged_geometry, 3857))::TEXT AS envelope INTO extent_bbox FROM merged;
SELECT CONCAT('logs_', logbook_rec.vessel_id::TEXT, '_', logbook_rec.id, '.png') INTO logs_img_url;
PERFORM public.qgis_getmap_py_fn(logbook_rec.vessel_id::TEXT, logbook_rec.id, extent_bbox::TEXT, True);
-- Prepare notification, gather user settings
SELECT json_build_object('logbook_name', logbook_rec.name, 'logbook_link', logbook_rec.id, 'logbook_img', log_img_url) INTO log_settings;
user_settings := get_user_settings_from_vesselid_fn(logbook_rec.vessel_id::TEXT);
SELECT user_settings::JSONB || log_settings::JSONB into user_settings;
RAISE NOTICE '-> debug process_post_logbook_fn get_user_settings_from_vesselid_fn [%]', user_settings;
RAISE NOTICE '-> debug process_post_logbook_fn log_settings [%]', log_settings;
-- Send notification
PERFORM send_notification_fn('logbook'::TEXT, user_settings::JSONB);
-- Process badges
RAISE NOTICE '-> debug process_post_logbook_fn user_settings [%]', user_settings->>'email'::TEXT;
PERFORM set_config('user.email', user_settings->>'email'::TEXT, false);
PERFORM badges_logbook_fn(logbook_rec.id, logbook_rec._to_time::TEXT);
PERFORM badges_geom_fn(logbook_rec.id, logbook_rec._to_time::TEXT);
END;
$process_post_logbook_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_post_logbook_fn
IS 'Generate QGIS image and Notify user for new logbook.';
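-- Example (hypothetical id, a sketch): reprocess a single closed logbook by hand:
--   SELECT public.process_post_logbook_fn(8430);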
-- Check for new logbook pending notification
DROP FUNCTION IF EXISTS public.cron_process_post_logbook_fn;
CREATE FUNCTION public.cron_process_post_logbook_fn() RETURNS void AS $$
DECLARE
process_rec record;
BEGIN
-- Check for new logbook pending update
RAISE NOTICE 'cron_process_post_logbook_fn init loop';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'post_logbook' AND processed IS NULL
ORDER BY stored ASC LIMIT 100
LOOP
RAISE NOTICE 'cron_process_post_logbook_fn processing queue [%] for logbook id [%]', process_rec.id, process_rec.payload;
-- update logbook
PERFORM process_post_logbook_fn(process_rec.payload::INTEGER);
-- update process_queue table, mark entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE 'cron_process_post_logbook_fn processed queue [%] for logbook id [%]', process_rec.id, process_rec.payload;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_post_logbook_fn
IS 'init by pg_cron to check for new logbook pending qgis and notification, after process_new_logbook_fn';
DROP FUNCTION IF EXISTS public.run_cron_jobs;
CREATE FUNCTION public.run_cron_jobs() RETURNS void AS $$
BEGIN
-- In correct order
perform public.cron_process_new_notification_fn();
perform public.cron_process_monitor_online_fn();
--perform public.cron_process_grafana_fn();
perform public.cron_process_pre_logbook_fn();
perform public.cron_process_new_logbook_fn();
perform public.cron_process_post_logbook_fn();
perform public.cron_process_new_stay_fn();
--perform public.cron_process_new_moorage_fn();
perform public.cron_process_monitor_offline_fn();
END
$$ language plpgsql;
DROP VIEW IF EXISTS api.eventlogs_view;
CREATE VIEW api.eventlogs_view WITH (security_invoker=true,security_barrier=true) AS
SELECT pq.*
FROM public.process_queue pq
WHERE channel <> 'pre_logbook'
AND channel <> 'post_logbook'
AND (ref_id = current_setting('user.id', true)
OR ref_id = current_setting('vessel.id', true))
ORDER BY id ASC;
-- Description
COMMENT ON VIEW
api.eventlogs_view
IS 'Event logs view';
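-- Example usage (requires user.id or vessel.id in the session, enforced by the
-- view filter above):
--   SELECT * FROM api.eventlogs_view;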
-- CRON for new video notification
DROP FUNCTION IF EXISTS public.cron_process_new_video_fn;
CREATE FUNCTION public.cron_process_new_video_fn() RETURNS void AS $$
declare
process_rec record;
metadata_rec record;
video_settings jsonb;
user_settings jsonb;
begin
-- Check for new event notification pending update
RAISE NOTICE 'cron_process_new_video_fn';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_video'
AND processed IS NULL
ORDER BY stored ASC
LOOP
RAISE NOTICE '-> cron_process_new_video_fn for [%]', process_rec.payload;
SELECT * INTO metadata_rec
FROM api.metadata
WHERE vessel_id = process_rec.ref_id::TEXT;
IF metadata_rec.vessel_id IS NULL OR metadata_rec.vessel_id = '' THEN
RAISE WARNING '-> cron_process_new_video_fn invalid metadata record vessel_id %', process_rec.ref_id;
RAISE EXCEPTION 'Invalid metadata'
USING HINT = 'Unknown vessel_id';
RETURN;
END IF;
PERFORM set_config('vessel.id', metadata_rec.vessel_id, false);
RAISE DEBUG '-> DEBUG cron_process_new_video_fn vessel_id %', current_setting('vessel.id', false);
-- Prepare notification, gather user settings
SELECT json_build_object('video_link', CONCAT('https://videos.openplotter.cloud/', process_rec.payload)) into video_settings;
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(metadata_rec.vessel_id::TEXT);
SELECT user_settings::JSONB || video_settings::JSONB into user_settings;
RAISE DEBUG '-> DEBUG cron_process_new_video_fn get_user_settings_from_vesselid_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('video_ready'::TEXT, user_settings::JSONB);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> cron_process_new_video_fn updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_video_fn
IS 'init by pg_cron to check for new video event pending notifications, if so perform process_notification_queue_fn';
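-- Example (a sketch; payload is hypothetical): the external maplapse worker is
-- expected to enqueue the rendered video file name on the 'new_video' channel,
-- which this cron job then picks up:
--   INSERT INTO process_queue (channel, payload, stored, ref_id)
--       VALUES ('new_video', 'maplapse_8430.mp4', NOW(), current_setting('vessel.id', true));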
-- Add support for video link for maplapse
DROP FUNCTION IF EXISTS public.send_pushover_py_fn;
CREATE OR REPLACE FUNCTION public.send_pushover_py_fn(IN message_type TEXT, IN _user JSONB, IN app JSONB) RETURNS void
AS $send_pushover_py$
"""
https://pushover.net/api#messages
Send a notification to a pushover user
"""
import requests
# Use the shared cache to avoid preparing the email metadata
if message_type in SD:
plan = SD[message_type]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT * FROM email_templates WHERE name = $1", ["text"])
SD[message_type] = plan
# Execute the statement with the message_type param and limit to 1 result
rv = plpy.execute(plan, [message_type], 1)
pushover_title = rv[0]['pushover_title']
pushover_message = rv[0]['pushover_message']
# Replace fields using input jsonb obj
if 'logbook_name' in _user and _user['logbook_name']:
pushover_message = pushover_message.replace('__LOGBOOK_NAME__', _user['logbook_name'])
if 'logbook_link' in _user and _user['logbook_link']:
pushover_message = pushover_message.replace('__LOGBOOK_LINK__', str(_user['logbook_link']))
if 'video_link' in _user and _user['video_link']:
pushover_message = pushover_message.replace('__VIDEO_LINK__', str( _user['video_link']))
if 'recipient' in _user and _user['recipient']:
pushover_message = pushover_message.replace('__RECIPIENT__', _user['recipient'])
if 'boat' in _user and _user['boat']:
pushover_message = pushover_message.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
pushover_message = pushover_message.replace('__BADGE_NAME__', _user['badge'])
if 'alert' in _user and _user['alert']:
pushover_message = pushover_message.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
pushover_message = pushover_message.replace('__APP_URL__', app['app.url'])
pushover_token = None
if 'app.pushover_app_token' in app and app['app.pushover_app_token']:
pushover_token = app['app.pushover_app_token']
else:
plpy.error('Error no pushover token defined, check app settings')
return None
pushover_user = None
if 'pushover_user_key' in _user and _user['pushover_user_key']:
pushover_user = _user['pushover_user_key']
else:
plpy.error('Error no pushover user token defined, check user settings')
return None
if message_type == 'logbook' and 'logbook_img' in _user and _user['logbook_img']:
# Send notification with gis image logbook as attachment
img_url = 'https://gis.openplotter.cloud/{}'.format(str(_user['logbook_img']))
response = requests.get(img_url, stream=True)
if response.status_code == 200:
r = requests.post("https://api.pushover.net/1/messages.json", data = {
"token": pushover_token,
"user": pushover_user,
"title": pushover_title,
"message": pushover_message
}, files = {
"attachment": (str(_user['logbook_img']), response.raw.data, "image/png")
})
else:
r = requests.post("https://api.pushover.net/1/messages.json", data = {
"token": pushover_token,
"user": pushover_user,
"title": pushover_title,
"message": pushover_message
})
#print(r.text)
# Check the response status and log the outcome
#plpy.notice('Sent pushover successfully to [{}] [{}]'.format(r.text, r.status_code))
if r.status_code == 200:
plpy.notice('Sent pushover successfully to [{}] [{}] [{}]'.format(pushover_user, pushover_title, r.text))
else:
plpy.error('Failed to send pushover')
return None
$send_pushover_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.send_pushover_py_fn
IS 'Send pushover notification using plpython3u';
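-- Example usage (a sketch; the token and user key values are hypothetical):
--   SELECT public.send_pushover_py_fn('logbook',
--       '{"pushover_user_key": "u123", "boat": "Bromera", "logbook_name": "Sunday sail"}'::jsonb,
--       '{"app.pushover_app_token": "a456", "app.url": "https://example.com"}'::jsonb);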
-- Add support for video link for maplapse
DROP FUNCTION IF EXISTS public.send_telegram_py_fn;
CREATE OR REPLACE FUNCTION public.send_telegram_py_fn(IN message_type TEXT, IN _user JSONB, IN app JSONB) RETURNS void
AS $send_telegram_py$
"""
https://core.telegram.org/bots/api#sendmessage
Send a message to a telegram user or group specified on chatId
chat_id must be a number!
"""
import requests
import json
# Use the shared cache to avoid preparing the email metadata
if message_type in SD:
plan = SD[message_type]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT * FROM email_templates WHERE name = $1", ["text"])
SD[message_type] = plan
# Execute the statement with the message_type param and limit to 1 result
rv = plpy.execute(plan, [message_type], 1)
telegram_title = rv[0]['pushover_title']
telegram_message = rv[0]['pushover_message']
# Replace fields using input jsonb obj
if 'logbook_name' in _user and _user['logbook_name']:
telegram_message = telegram_message.replace('__LOGBOOK_NAME__', _user['logbook_name'])
if 'logbook_link' in _user and _user['logbook_link']:
telegram_message = telegram_message.replace('__LOGBOOK_LINK__', str(_user['logbook_link']))
if 'video_link' in _user and _user['video_link']:
telegram_message = telegram_message.replace('__VIDEO_LINK__', str( _user['video_link']))
if 'recipient' in _user and _user['recipient']:
telegram_message = telegram_message.replace('__RECIPIENT__', _user['recipient'])
if 'boat' in _user and _user['boat']:
telegram_message = telegram_message.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
telegram_message = telegram_message.replace('__BADGE_NAME__', _user['badge'])
if 'alert' in _user and _user['alert']:
telegram_message = telegram_message.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
telegram_message = telegram_message.replace('__APP_URL__', app['app.url'])
telegram_token = None
if 'app.telegram_bot_token' in app and app['app.telegram_bot_token']:
telegram_token = app['app.telegram_bot_token']
else:
plpy.error('Error no telegram token defined, check app settings')
return None
telegram_chat_id = None
if 'telegram_chat_id' in _user and _user['telegram_chat_id']:
telegram_chat_id = _user['telegram_chat_id']
else:
plpy.error('Error no telegram user token defined, check user settings')
return None
# sendMessage via requests
headers = {'Content-Type': 'application/json',
'Proxy-Authorization': 'Basic base64'}
data_dict = {'chat_id': telegram_chat_id,
'text': telegram_message,
'parse_mode': 'HTML',
'disable_notification': False}
data = json.dumps(data_dict)
url = f'https://api.telegram.org/bot{telegram_token}/sendMessage'
r = requests.post(url, data=data, headers=headers)
if message_type == 'logbook' and 'logbook_img' in _user and _user['logbook_img']:
# Send gis image logbook
# https://core.telegram.org/bots/api#sendphoto
data_dict['photo'] = 'https://gis.openplotter.cloud/{}'.format(str(_user['logbook_img']))
del data_dict['text']
data = json.dumps(data_dict)
url = f'https://api.telegram.org/bot{telegram_token}/sendPhoto'
r = requests.post(url, data=data, headers=headers)
#print(r.text)
# Check the response status and log the outcome
#plpy.notice('Sent telegram successfully to [{}] [{}]'.format(r.text, r.status_code))
if r.status_code == 200:
plpy.notice('Sent telegram successfully to [{}] [{}] [{}]'.format(telegram_chat_id, telegram_title, r.text))
else:
plpy.error('Failed to send telegram')
return None
$send_telegram_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.send_telegram_py_fn
IS 'Send a message to a telegram user or group specified on chatId using plpython3u';
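-- Example usage (a sketch; the numeric chat id and bot token are hypothetical):
--   SELECT public.send_telegram_py_fn('logbook',
--       '{"telegram_chat_id": 123456789, "boat": "Bromera", "logbook_name": "Sunday sail"}'::jsonb,
--       '{"app.telegram_bot_token": "123:ABC", "app.url": "https://example.com"}'::jsonb);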
-- Add maplapse video record in queue
DROP FUNCTION IF EXISTS api.maplapse_record_fn;
CREATE OR REPLACE FUNCTION api.maplapse_record_fn(IN maplapse TEXT) RETURNS BOOLEAN
AS $maplapse_record$
BEGIN
-- payload: 'Bromera,?start_log=8430&end_log=8491&height=100vh'
IF maplapse ~ '^(\w+)\,\?(start_log=\d+).*$' then
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('maplapse_video', maplapse, NOW(), current_setting('vessel.id', true));
RETURN True;
ELSE
RETURN False;
END IF;
END;
$maplapse_record$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
api.maplapse_record_fn
IS 'Add maplapse video record in queue';
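-- Example usage, matching the payload format documented in the function body:
--   SELECT api.maplapse_record_fn('Bromera,?start_log=8430&end_log=8491&height=100vh'); -- true, queued
--   SELECT api.maplapse_record_fn('not-a-valid-payload'); -- false, nothing queued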
CREATE ROLE maplapse_role WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 10 LOGIN PASSWORD 'mysecretpassword';
COMMENT ON ROLE maplapse_role IS
'Role used by the maplapse external cron job to connect and look up the process_queue table.';
GRANT USAGE ON SCHEMA public TO maplapse_role;
GRANT SELECT,UPDATE,INSERT ON TABLE public.process_queue TO maplapse_role;
GRANT USAGE, SELECT ON SEQUENCE public.process_queue_id_seq TO maplapse_role;
-- Allow maplapse_role to select,update,insert on tbl process_queue
CREATE POLICY public_maplapse_role ON public.process_queue TO maplapse_role
USING (true)
WITH CHECK (true);
-- Allow user_role and grafana to execute functions
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO grafana;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO grafana;
GRANT SELECT ON TABLE api.eventlogs_view TO user_role;
-- Update grafana role SQL connection limit to 30
ALTER ROLE grafana WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 30 LOGIN;
-- Alter moorages table with default duration of 0.
ALTER TABLE api.moorages ALTER COLUMN stay_duration SET DEFAULT 'PT0S';
-- Update moorage_view to default to a duration of 0
DROP VIEW IF EXISTS api.moorage_view;
CREATE OR REPLACE VIEW api.moorage_view WITH (security_invoker=true,security_barrier=true) AS -- TODO
SELECT id,
m.name AS Name,
sa.description AS Default_Stay,
sa.stay_code AS Default_Stay_Id,
m.home_flag AS Home,
EXTRACT(DAY FROM justify_hours ( COALESCE(m.stay_duration, 'PT0S') )) AS Total_Stay,
COALESCE(m.stay_duration, 'PT0S') AS Total_Duration,
m.reference_count AS Arrivals_Departures,
m.notes
-- m.geog
FROM api.moorages m, api.stays_at sa
-- m.stay_duration is only processed during a stay
-- default to a duration of 0 sec
WHERE geog IS NOT NULL
AND m.stay_code = sa.stay_code;
-- Description
COMMENT ON VIEW
api.moorage_view
IS 'Moorage details web view';
-- Update version
UPDATE public.app_settings
SET value='0.7.4'
WHERE "name"='app.version';
\c postgres
-- Create a cron job running every 7 minutes for cron_process_post_logbook_fn
SELECT cron.schedule('cron_post_logbook', '*/7 * * * *', 'select public.cron_process_post_logbook_fn()');
UPDATE cron.job SET database = 'signalk' where jobname = 'cron_post_logbook';
-- Create a cron job running every 15 minutes for cron_process_new_video_fn
SELECT cron.schedule('cron_new_video', '*/15 * * * *', 'select public.cron_process_new_video_fn()');
UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_video';


@@ -0,0 +1,755 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration July 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Add video error notification message
INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
VALUES ('video_error','PostgSail Video Error',E'Hey,\nSorry we could not generate your video.\nPlease reach out to debug and solve the issue.','PostgSail Video Error!',E'There has been an error with your video.');
-- CRON for new and error video notification
DROP FUNCTION IF EXISTS public.cron_process_new_video_fn;
CREATE FUNCTION public.cron_process_video_fn() RETURNS void AS $cron_process_video$
DECLARE
process_rec record;
metadata_rec record;
video_settings jsonb;
user_settings jsonb;
BEGIN
-- Check for new event notification pending update
RAISE NOTICE 'cron_process_video_fn';
FOR process_rec in
SELECT * FROM process_queue
WHERE (channel = 'new_video' OR channel = 'error_video')
AND processed IS NULL
ORDER BY stored ASC
LOOP
RAISE NOTICE '-> cron_process_video_fn for [%]', process_rec.payload;
SELECT * INTO metadata_rec
FROM api.metadata
WHERE vessel_id = process_rec.ref_id::TEXT;
IF metadata_rec.vessel_id IS NULL OR metadata_rec.vessel_id = '' THEN
RAISE WARNING '-> cron_process_video_fn invalid metadata record vessel_id %', process_rec.ref_id;
RAISE EXCEPTION 'Invalid metadata'
USING HINT = 'Unknown vessel_id';
RETURN;
END IF;
PERFORM set_config('vessel.id', metadata_rec.vessel_id, false);
RAISE DEBUG '-> DEBUG cron_process_video_fn vessel_id %', current_setting('vessel.id', false);
-- Prepare notification, gather user settings
SELECT json_build_object('video_link', CONCAT('https://videos.openplotter.cloud/', process_rec.payload)) into video_settings;
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(metadata_rec.vessel_id::TEXT);
SELECT user_settings::JSONB || video_settings::JSONB into user_settings;
RAISE DEBUG '-> DEBUG cron_process_video_fn get_user_settings_from_vesselid_fn [%]', user_settings;
-- Send notification
IF process_rec.channel = 'new_video' THEN
PERFORM send_notification_fn('video_ready'::TEXT, user_settings::JSONB);
ELSE
PERFORM send_notification_fn('video_error'::TEXT, user_settings::JSONB);
END IF;
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> cron_process_video_fn updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$cron_process_video$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_video_fn
IS 'init by pg_cron to check for new video event pending notifications, if so perform process_notification_queue_fn';
-- Fix error when stateOfCharge is null: assume a null stateOfCharge means fully charged (1).
DROP FUNCTION IF EXISTS public.cron_alerts_fn();
CREATE OR REPLACE FUNCTION public.cron_alerts_fn() RETURNS void AS $cron_alerts$
DECLARE
alert_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ;
metric_rec record;
app_settings JSONB;
user_settings JSONB;
alerting JSONB;
_alarms JSONB;
alarms TEXT;
alert_default JSONB := '{
"low_pressure_threshold": 990,
"high_wind_speed_threshold": 30,
"low_water_depth_threshold": 1,
"min_notification_interval": 6,
"high_pressure_drop_threshold": 12,
"low_battery_charge_threshold": 90,
"low_battery_voltage_threshold": 12.5,
"low_water_temperature_threshold": 10,
"low_indoor_temperature_threshold": 7,
"low_outdoor_temperature_threshold": 3
}';
BEGIN
-- Check for new event notification pending update
RAISE NOTICE 'cron_alerts_fn';
FOR alert_rec in
SELECT
a.user_id,a.email,v.vessel_id,
COALESCE((a.preferences->'alert_last_metric')::TEXT, default_last_metric::TEXT) as last_metric,
(alert_default || (a.preferences->'alerting')::JSONB) as alerting,
(a.preferences->'alarms')::JSONB as alarms
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'alerting'->'enabled')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_alerts_fn for [%]', alert_rec;
PERFORM set_config('vessel.id', alert_rec.vessel_id, false);
PERFORM set_config('user.email', alert_rec.email, false);
--RAISE WARNING 'public.cron_process_alert_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(alert_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_alerts_fn checking user_settings [%]', user_settings;
-- Get all metrics since last_metric, averaged in 5-minute buckets
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
avg((m.metrics->'environment.inside.temperature')::numeric) AS intemp,
avg((m.metrics->'environment.outside.temperature')::numeric) AS outtemp,
avg((m.metrics->'environment.water.temperature')::numeric) AS wattemp,
avg((m.metrics->'environment.depth.belowTransducer')::numeric) AS watdepth,
avg((m.metrics->'environment.outside.pressure')::numeric) AS pressure,
avg((m.metrics->'environment.wind.speedTrue')::numeric) AS wind,
avg((m.metrics->'electrical.batteries.House.voltage')::numeric) AS voltage,
avg(coalesce((m.metrics->>'electrical.batteries.House.capacity.stateOfCharge')::numeric, 1)) AS charge
FROM api.metrics m
WHERE vessel_id = alert_rec.vessel_id
AND m.time >= alert_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_alerts_fn checking metrics [%]', metric_rec;
RAISE NOTICE '-> cron_alerts_fn checking alerting [%]', alert_rec.alerting;
--RAISE NOTICE '-> cron_alerts_fn checking debug [%] [%]', kelvinToCel(metric_rec.intemp), (alert_rec.alerting->'low_indoor_temperature_threshold');
IF kelvinToCel(metric_rec.intemp) < (alert_rec.alerting->'low_indoor_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_indoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_indoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.intemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.intemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold';
END IF;
IF kelvinToCel(metric_rec.outtemp) < (alert_rec.alerting->'low_outdoor_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_outdoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_outdoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.outtemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.outtemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold';
END IF;
IF kelvinToCel(metric_rec.wattemp) < (alert_rec.alerting->'low_water_temperature_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.wattemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_temperature_threshold value:'|| kelvinToCel(metric_rec.wattemp) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold';
END IF;
IF metric_rec.watdepth < (alert_rec.alerting->'low_water_depth_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_depth_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_depth_threshold": {"value": '|| metric_rec.watdepth ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_depth_threshold value:'|| metric_rec.watdepth ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold';
END IF;
if metric_rec.pressure < (alert_rec.alerting->'high_pressure_drop_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_pressure_drop_threshold'->>'date') IS NULL) OR
(((_alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_pressure_drop_threshold": {"value": '|| metric_rec.pressure ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_pressure_drop_threshold value:'|| metric_rec.pressure ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold';
END IF;
IF metric_rec.wind > (alert_rec.alerting->'high_wind_speed_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_wind_speed_threshold'->>'date') IS NULL) OR
(((_alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_wind_speed_threshold": {"value": '|| metric_rec.wind ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_wind_speed_threshold value:'|| metric_rec.wind ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold';
END IF;
if metric_rec.voltage < (alert_rec.alerting->'low_battery_voltage_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_voltage_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_voltage_threshold": {"value": '|| metric_rec.voltage ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_voltage_threshold value:'|| metric_rec.voltage ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold';
END IF;
if (metric_rec.charge*100) < (alert_rec.alerting->'low_battery_charge_threshold')::numeric then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_charge_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_charge_threshold": {"value": '|| (metric_rec.charge*100) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_charge_threshold value:'|| (metric_rec.charge*100) ||'"}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold';
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{alert_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$cron_alerts$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_alerts_fn
IS 'init by pg_cron to check for alerts';
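-- Example (a sketch; any threshold not supplied falls back to alert_default above):
--   SELECT api.update_user_preferences_fn('{alerting}'::TEXT,
--       '{"enabled": true, "min_notification_interval": 6, "low_battery_charge_threshold": 90}'::TEXT);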
-- Fix error: None of these media types are available: text/xml
DROP FUNCTION IF EXISTS api.export_logbooks_gpx_fn;
CREATE OR REPLACE FUNCTION api.export_logbooks_gpx_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL) RETURNS "text/xml"
AS $export_logbooks_gpx$
declare
merged_jsonb jsonb;
app_settings jsonb;
BEGIN
-- Merge GIS track_geom of geometry type Point into a jsonb array format
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(
jsonb_build_object('coordinates', f->'geometry'->'coordinates', 'time', f->'properties'->>'time')
) INTO merged_jsonb
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE id >= start_log
AND id <= end_log
AND track_geojson IS NOT NULL
ORDER BY _from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
ELSE
SELECT jsonb_agg(
jsonb_build_object('coordinates', f->'geometry'->'coordinates', 'time', f->'properties'->>'time')
) INTO merged_jsonb
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE track_geojson IS NOT NULL
ORDER BY _from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
END IF;
--RAISE WARNING '-> export_logbooks_gpx_fn _jsonb %' , _jsonb;
-- Gather url from app settings
app_settings := get_app_url_fn();
--RAISE WARNING '-> export_logbooks_gpx_fn app_settings %', app_settings;
-- Generate GPX XML, extract Point features from geojson.
RETURN xmlelement(name gpx,
xmlattributes( '1.1' as version,
'PostgSAIL' as creator,
'http://www.topografix.com/GPX/1/1' as xmlns,
'http://www.opencpn.org' as "xmlns:opencpn",
app_settings->>'app.url' as "xmlns:postgsail"),
xmlelement(name metadata,
xmlelement(name link, xmlattributes(app_settings->>'app.url' as href),
xmlelement(name text, 'PostgSail'))),
xmlelement(name trk,
xmlelement(name name, 'logbook name'),
xmlelement(name trkseg, xmlagg(
xmlelement(name trkpt,
xmlattributes(features->'coordinates'->1 as lat, features->'coordinates'->0 as lon),
xmlelement(name time, features->'properties'->>'time')
)))))::pg_catalog.xml
FROM jsonb_array_elements(merged_jsonb) AS features;
END;
$export_logbooks_gpx$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.export_logbooks_gpx_fn
IS 'Export log entries to GPX XML format';
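-- Example usage (hypothetical log ids; call without arguments to export all logs):
--   SELECT api.export_logbooks_gpx_fn(8430, 8491);
--   SELECT api.export_logbooks_gpx_fn();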
-- Add export logbooks as png
DROP FUNCTION IF EXISTS public.qgis_bbox_trip_py_fn;
CREATE OR REPLACE FUNCTION public.qgis_bbox_trip_py_fn(IN _str_to_parse TEXT DEFAULT NULL, OUT bbox TEXT)
AS $qgis_bbox_trip_py$
plpy.notice('qgis_bbox_trip_py_fn _str_to_parse [{}]'.format(_str_to_parse))
vessel_id, log_id, log_end = _str_to_parse.split('_')
width = 1080
height = 566
scaleout = True
log_extent = None
# If we have a vessel_id but no log_end then it is a full-logs image map
if vessel_id and not log_end:
# Use the shared cache to avoid preparing the log extent
if vessel_id in SD:
plan = SD[vessel_id]
# A prepared statement from Python
else:
plan = plpy.prepare("WITH merged AS ( SELECT ST_Union(track_geom) AS merged_geometry FROM api.logbook WHERE vessel_id = $1 ) SELECT ST_Extent(ST_Transform(merged_geometry, 3857))::TEXT FROM merged;", ["text"])
SD[vessel_id] = plan
# Execute the statement with the log extent param and limit to 1 result
rv = plpy.execute(plan, [vessel_id], 1)
log_extent = rv[0]['st_extent']
# If we have a vessel_id and a log_end then it is subset logs image map
elif vessel_id and log_end:
# Use the shared cache to avoid preparing the log extent
shared_cache = vessel_id + str(log_id) + str(log_end)
if shared_cache in SD:
plan = SD[shared_cache]
# A prepared statement from Python
else:
plan = plpy.prepare("WITH merged AS ( SELECT ST_Union(track_geom) AS merged_geometry FROM api.logbook WHERE vessel_id = $1 and id >= $2::NUMERIC and id <= $3::NUMERIC) SELECT ST_Extent(ST_Transform(merged_geometry, 3857))::TEXT FROM merged;", ["text","text","text"])
SD[shared_cache] = plan
# Execute the statement with the log extent param and limit to 1 result
rv = plpy.execute(plan, [vessel_id,log_id,log_end], 1)
log_extent = rv[0]['st_extent']
# Else we have a log_id then it is single log image map
else :
# Use the shared cache to avoid preparing the log extent
if log_id in SD:
plan = SD[log_id]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT ST_Extent(ST_Transform(track_geom, 3857)) FROM api.logbook WHERE id = $1::NUMERIC", ["text"])
SD[log_id] = plan
# Execute the statement with the log extent param and limit to 1 result
rv = plpy.execute(plan, [log_id], 1)
log_extent = rv[0]['st_extent']
# Extract extent
def parse_extent_from_db(extent_raw):
# Parse the extent_raw to extract coordinates
extent = extent_raw.replace('BOX(', '').replace(')', '').split(',')
min_x, min_y = map(float, extent[0].split())
max_x, max_y = map(float, extent[1].split())
return min_x, min_y, max_x, max_y
# ZoomOut from linestring extent
def apply_scale_factor(extent, scale_factor=1.125):
min_x, min_y, max_x, max_y = extent
center_x = (min_x + max_x) / 2
center_y = (min_y + max_y) / 2
width = max_x - min_x
height = max_y - min_y
new_width = width * scale_factor
new_height = height * scale_factor
scaled_extent = (
round(center_x - new_width / 2),
round(center_y - new_height / 2),
round(center_x + new_width / 2),
round(center_y + new_height / 2),
)
return scaled_extent
def adjust_bbox_to_fixed_size(scaled_extent, fixed_width, fixed_height):
min_x, min_y, max_x, max_y = scaled_extent
bbox_width = float(max_x - min_x)
bbox_height = float(max_y - min_y)
bbox_aspect_ratio = float(bbox_width / bbox_height)
image_aspect_ratio = float(fixed_width / fixed_height)
if bbox_aspect_ratio > image_aspect_ratio:
# Adjust height to match aspect ratio
new_bbox_height = bbox_width / image_aspect_ratio
height_diff = new_bbox_height - bbox_height
min_y -= height_diff / 2
max_y += height_diff / 2
else:
# Adjust width to match aspect ratio
new_bbox_width = bbox_height * image_aspect_ratio
width_diff = new_bbox_width - bbox_width
min_x -= width_diff / 2
max_x += width_diff / 2
adjusted_extent = (min_x, min_y, max_x, max_y)
return adjusted_extent
if not log_extent:
plpy.warning('Failed to get sql qgis_bbox_trip_py_fn log_id [{}], extent [{}]'.format(log_id, log_extent))
#plpy.notice('qgis_bbox_trip_py_fn log_id [{}], extent [{}]'.format(log_id, log_extent))
# Parse extent and apply ZoomOut scale factor
if scaleout:
scaled_extent = apply_scale_factor(parse_extent_from_db(log_extent))
else:
scaled_extent = parse_extent_from_db(log_extent)
#plpy.notice('qgis_bbox_trip_py_fn log_id [{}], scaled_extent [{}]'.format(log_id, scaled_extent))
fixed_width = width # default 1080
fixed_height = height # default 566
adjusted_extent = adjust_bbox_to_fixed_size(scaled_extent, fixed_width, fixed_height)
#plpy.notice('qgis_bbox_trip_py_fn log_id [{}], adjusted_extent [{}]'.format(log_id, adjusted_extent))
min_x, min_y, max_x, max_y = adjusted_extent
return f"{min_x},{min_y},{max_x},{max_y}"
$qgis_bbox_trip_py$ LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.qgis_bbox_trip_py_fn
IS 'Generate the BBOX base on trip extent and adapt extent to the image size for QGIS Server';
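-- Example usage (a sketch; the input is vessel_id, start log id and end log id
-- separated by underscores, all values hypothetical):
--   SELECT public.qgis_bbox_trip_py_fn('abcdef123_8430_8491');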
DROP FUNCTION IF EXISTS public.grafana_py_fn;
-- Update grafana provisioning, ERROR: KeyError: 'secureJsonFields'
CREATE OR REPLACE FUNCTION public.grafana_py_fn(_v_name text, _v_id text, _u_email text, app jsonb)
RETURNS void
TRANSFORM FOR TYPE jsonb
LANGUAGE plpython3u
AS $function$
"""
https://grafana.com/docs/grafana/latest/developers/http_api/
Create organization base on vessel name
Create user base on user email
Add user to organization
Add data_source to organization
Add dashboard to organization
Update organization preferences
"""
import requests
import json
import re
grafana_uri = None
if 'app.grafana_admin_uri' in app and app['app.grafana_admin_uri']:
grafana_uri = app['app.grafana_admin_uri']
else:
plpy.error('Error no grafana_admin_uri defined, check app settings')
return None
b_name = None
if not _v_name:
b_name = _v_id
else:
b_name = _v_name
# add vessel org
headers = {'User-Agent': 'PostgSail', 'From': 'xbgmsharp@gmail.com',
'Accept': 'application/json', 'Content-Type': 'application/json'}
path = 'api/orgs'
url = f'{grafana_uri}/{path}'
data_dict = {'name':b_name}
data = json.dumps(data_dict)
r = requests.post(url, data=data, headers=headers)
#print(r.text)
plpy.notice(r.json())
if r.status_code == 200 and "orgId" in r.json():
org_id = r.json()['orgId']
else:
plpy.error('Error grafana add vessel org {req} - {res}'.format(req=data_dict,res=r.json()))
return None
# add user to vessel org
path = 'api/admin/users'
url = f'{grafana_uri}/{path}'
data_dict = {'orgId':org_id, 'email':_u_email, 'password':'asupersecretpassword'}
data = json.dumps(data_dict)
r = requests.post(url, data=data, headers=headers)
#print(r.text)
plpy.notice(r.json())
if r.status_code == 200 and "id" in r.json():
user_id = r.json()['id']
else:
plpy.error('Error grafana add user to vessel org')
return
# read data_source
path = 'api/datasources/1'
url = f'{grafana_uri}/{path}'
r = requests.get(url, headers=headers)
#print(r.text)
plpy.notice(r.json())
data_source = r.json()
data_source['id'] = 0
data_source['orgId'] = org_id
data_source['uid'] = "ds_" + _v_id
data_source['name'] = "ds_" + _v_id
data_source['secureJsonData'] = {}
data_source['secureJsonData']['password'] = 'mysecretpassword'
data_source['readOnly'] = True
if "secureJsonFields" in data_source:
del data_source['secureJsonFields']
# add data_source to vessel org
path = 'api/datasources'
url = f'{grafana_uri}/{path}'
data = json.dumps(data_source)
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.post(url, data=data, headers=headers)
plpy.notice(r.json())
del headers['X-Grafana-Org-Id']
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana add data_source to vessel org')
return
dashboards_tpl = [ 'pgsail_tpl_electrical', 'pgsail_tpl_logbook', 'pgsail_tpl_monitor', 'pgsail_tpl_rpi', 'pgsail_tpl_solar', 'pgsail_tpl_weather', 'pgsail_tpl_home']
for dashboard in dashboards_tpl:
# read dashboard template by uid
path = 'api/dashboards/uid'
url = f'{grafana_uri}/{path}/{dashboard}'
if 'X-Grafana-Org-Id' in headers:
del headers['X-Grafana-Org-Id']
r = requests.get(url, headers=headers)
plpy.notice(r.json())
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana read dashboard template')
return
new_dashboard = r.json()
del new_dashboard['meta']
new_dashboard['dashboard']['version'] = 0
new_dashboard['dashboard']['id'] = 0
new_uid = re.sub(r'pgsail_tpl_(.*)', r'postgsail_\1', new_dashboard['dashboard']['uid'])
new_dashboard['dashboard']['uid'] = f'{new_uid}_{_v_id}'
# add dashboard to vessel org
path = 'api/dashboards/db'
url = f'{grafana_uri}/{path}'
data = json.dumps(new_dashboard)
new_data = data.replace('PCC52D03280B7034C', data_source['uid'])
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.post(url, data=new_data, headers=headers)
plpy.notice(r.json())
if r.status_code != 200 or "id" not in r.json():
plpy.error('Error grafana add dashboard to vessel org')
return
# Update Org Prefs
path = 'api/org/preferences'
url = f'{grafana_uri}/{path}'
home_dashboard = {}
home_dashboard['timezone'] = 'utc'
home_dashboard['homeDashboardUID'] = f'postgsail_home_{_v_id}'
data = json.dumps(home_dashboard)
headers['X-Grafana-Org-Id'] = str(org_id)
r = requests.patch(url, data=data, headers=headers)
plpy.notice(r.json())
if r.status_code != 200:
plpy.error('Error grafana update org preferences')
return
plpy.notice('Done')
$function$
;
COMMENT ON FUNCTION public.grafana_py_fn(text, text, text, jsonb) IS 'Grafana Organization,User,data_source,dashboards provisioning via HTTP API using plpython3u';
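-- Example usage (a sketch; assumes app_settings holds app.grafana_admin_uri and
-- that jsonb_object_agg packs the settings into the expected app JSONB):
--   SELECT public.grafana_py_fn('Bromera', 'abcdef123', 'user@example.com',
--       (SELECT jsonb_object_agg("name", value) FROM public.app_settings));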
-- Add missing comment on function cron_process_no_activity_fn
COMMENT ON FUNCTION
public.cron_process_no_activity_fn
IS 'init by pg_cron, check for vessels with no activity for more than 230 days, then send a notification';
-- Update grafana, qgis, api role SQL connection limit to 30
ALTER ROLE grafana WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 30 LOGIN;
ALTER ROLE api_anonymous WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 30 LOGIN;
ALTER ROLE qgis_role WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 30 LOGIN;
-- Create qgis schema for qgis projects
CREATE SCHEMA IF NOT EXISTS qgis;
COMMENT ON SCHEMA qgis IS 'Hold qgis_projects';
GRANT USAGE ON SCHEMA qgis TO qgis_role;
CREATE TABLE qgis.qgis_projects (
"name" text NOT NULL,
metadata jsonb NULL,
"content" bytea NULL,
CONSTRAINT qgis_projects_pkey PRIMARY KEY (name)
);
-- Description
COMMENT ON TABLE
qgis.qgis_projects
IS 'Store qgis projects using QGIS-Server or QGIS-Desktop from https://qgis.org/';
GRANT SELECT,INSERT,UPDATE,DELETE ON TABLE qgis.qgis_projects TO qgis_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO qgis_role;
-- Allow anonymous access to tables and views
GRANT SELECT ON ALL TABLES IN SCHEMA api TO api_anonymous;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO api_anonymous;
-- Allow EXECUTE on all FUNCTIONS on API and public schema to user_role
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user_role;
-- Allow EXECUTE on all FUNCTIONS on public schema to vessel_role
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO vessel_role;
-- Update version
UPDATE public.app_settings
SET value='0.7.5'
WHERE "name"='app.version';
\c postgres
-- Update video cronjob
UPDATE cron.job
SET command='select public.cron_process_video_fn()'
WHERE jobname = 'cron_new_video';
UPDATE cron.job
SET jobname='cron_video'
WHERE command='select public.cron_process_video_fn()';

File diff suppressed because it is too large


@@ -0,0 +1,693 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration September 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Add new email template account_inactivity
INSERT INTO public.email_templates ("name",email_subject,email_content,pushover_title,pushover_message)
VALUES ('inactivity','We Haven''t Seen You in a While!','Hi __RECIPIENT__,
You''re busy. We understand.
You haven''t logged into PostgSail for a considerable period. Since we last saw you, we have continued to add new and exciting features to help you explore your navigation journey.
Meanwhile, we have cleaned up your data. If you wish to maintain an up-to-date overview of your sailing journey in PostgSail''s dashboard, kindly log in to your account within the next seven days.
Please note that your account will be permanently deleted if it remains inactive for seven more days.
If you have any questions or concerns or if you believe this to be an error, please do not hesitate to reach out at info@openplotter.cloud.
Sincerely,
Francois','We Haven''t Seen You in a While!','You haven''t logged into PostgSail for a considerable period. Log in to check what''s new!');
-- Update HTML email for new logbook
DROP FUNCTION IF EXISTS public.send_email_py_fn;
CREATE OR REPLACE FUNCTION public.send_email_py_fn(IN email_type TEXT, IN _user JSONB, IN app JSONB) RETURNS void
AS $send_email_py$
# Import smtplib for the actual sending function
import smtplib
import requests
# Import the email modules we need
from email.message import EmailMessage
from email.utils import formatdate,make_msgid
from email.mime.text import MIMEText
# Use the shared cache to avoid preparing the email metadata
if email_type in SD:
plan = SD[email_type]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT * FROM email_templates WHERE name = $1", ["text"])
SD[email_type] = plan
# Execute the statement with the email_type param and limit to 1 result
rv = plpy.execute(plan, [email_type], 1)
email_subject = rv[0]['email_subject']
email_content = rv[0]['email_content']
# Replace fields using input jsonb obj
if not _user or not app:
plpy.notice('send_email_py_fn Parameters [{}] [{}]'.format(_user, app))
plpy.error('Error missing parameters')
return None
if 'logbook_name' in _user and _user['logbook_name']:
email_content = email_content.replace('__LOGBOOK_NAME__', str(_user['logbook_name']))
if 'logbook_link' in _user and _user['logbook_link']:
email_content = email_content.replace('__LOGBOOK_LINK__', str(_user['logbook_link']))
if 'logbook_img' in _user and _user['logbook_img']:
email_content = email_content.replace('__LOGBOOK_IMG__', str(_user['logbook_img']))
if 'logbook_stats' in _user and _user['logbook_stats']:
email_content = email_content.replace('__LOGBOOK_STATS__', str(_user['logbook_stats']))
if 'video_link' in _user and _user['video_link']:
email_content = email_content.replace('__VIDEO_LINK__', str(_user['video_link']))
if 'recipient' in _user and _user['recipient']:
email_content = email_content.replace('__RECIPIENT__', _user['recipient'])
if 'boat' in _user and _user['boat']:
email_content = email_content.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
email_content = email_content.replace('__BADGE_NAME__', _user['badge'])
if 'otp_code' in _user and _user['otp_code']:
email_content = email_content.replace('__OTP_CODE__', _user['otp_code'])
if 'reset_qs' in _user and _user['reset_qs']:
email_content = email_content.replace('__RESET_QS__', _user['reset_qs'])
if 'alert' in _user and _user['alert']:
email_content = email_content.replace('__ALERT__', _user['alert'])
if 'app.url' in app and app['app.url']:
email_content = email_content.replace('__APP_URL__', app['app.url'])
email_from = 'root@localhost'
if 'app.email_from' in app and app['app.email_from']:
email_from = 'PostgSail <' + app['app.email_from'] + '>'
#plpy.notice('Sending email from [{}] [{}]'.format(email_from, app['app.email_from']))
email_to = 'root@localhost'
if 'email' in _user and _user['email']:
email_to = _user['email']
#plpy.notice('Sending email to [{}] [{}]'.format(email_to, _user['email']))
else:
plpy.error('Error email to')
return None
if email_type == 'logbook':
msg = EmailMessage()
msg.set_content(email_content)
else:
msg = MIMEText(email_content, 'plain', 'utf-8')
msg["Subject"] = email_subject
msg["From"] = email_from
msg["To"] = email_to
msg["Date"] = formatdate()
msg["Message-ID"] = make_msgid()
if email_type == 'logbook' and 'logbook_img' in _user and _user['logbook_img']:
# Create a Content-ID for the image
image_cid = make_msgid()
# Transform to HTML template, replace text by HTML link
logbook_link = "{__APP_URL__}/log/{__LOGBOOK_LINK__}".format( __APP_URL__=app['app.url'], __LOGBOOK_LINK__=str(_user['logbook_link']))
timelapse_link = "{__APP_URL__}/timelapse/{__LOGBOOK_LINK__}".format( __APP_URL__=app['app.url'], __LOGBOOK_LINK__=str(_user['logbook_link']))
email_content = email_content.replace('\n', '<br/>')
email_content = email_content.replace(logbook_link, '<a href="{logbook_link}">{logbook_link}</a>'.format(logbook_link=str(logbook_link)))
    email_content = email_content.replace(timelapse_link, '<a href="{timelapse_link}">{timelapse_link}</a>'.format(timelapse_link=str(timelapse_link)))
email_content = email_content.replace(str(_user['logbook_name']), '<a href="{logbook_link}">{logbook_name}</a>'.format(logbook_link=str(logbook_link), logbook_name=str(_user['logbook_name'])))
# Set an alternative html body
msg.add_alternative("""\
<html>
<body>
<p>{email_content}</p>
<img src="cid:{image_cid}">
</body>
</html>
""".format(email_content=email_content, image_cid=image_cid[1:-1]), subtype='html')
img_url = 'https://gis.openplotter.cloud/{}'.format(str(_user['logbook_img']))
response = requests.get(img_url, stream=True)
if response.status_code == 200:
msg.get_payload()[1].add_related(response.raw.data,
maintype='image',
subtype='png',
cid=image_cid)
server_smtp = 'localhost'
if 'app.email_server' in app and app['app.email_server']:
server_smtp = app['app.email_server']
#plpy.notice('Sending server [{}] [{}]'.format(server_smtp, app['app.email_server']))
# Send the message via our own SMTP server.
try:
# send your message with credentials specified above
with smtplib.SMTP(server_smtp, 587) as server:
if 'app.email_user' in app and app['app.email_user'] \
and 'app.email_pass' in app and app['app.email_pass']:
server.starttls()
server.login(app['app.email_user'], app['app.email_pass'])
#server.send_message(msg)
server.sendmail(msg["From"], msg["To"], msg.as_string())
server.quit()
# tell the script to report if your message was sent or which errors need to be fixed
plpy.notice('Sent email successfully to [{}] [{}]'.format(msg["To"], msg["Subject"]))
return None
# smtplib exceptions inherit from OSError, so catch the specific ones first
except smtplib.SMTPConnectError:
    plpy.error('Failed to connect to the server. Bad connection settings?')
except smtplib.SMTPServerDisconnected:
    plpy.error('Server unexpectedly disconnected. Wrong user/password?')
except smtplib.SMTPException as e:
    plpy.error('SMTP error occurred: ' + str(e))
except OSError as error:
    plpy.error('OS Error occurred: ' + str(error))
$send_email_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.send_email_py_fn
IS 'Send email notification using plpython3u';
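-- Example usage (illustrative only and kept commented out so the migration is
-- unaffected; the addresses and payload values below are placeholders):
--SELECT public.send_email_py_fn('logbook',
--    '{"email": "user@example.com", "recipient": "Sam", "boat": "Lola", "logbook_name": "Morning sail", "logbook_link": "123"}'::JSONB,
--    '{"app.url": "https://localhost", "app.email_from": "no-reply@localhost", "app.email_server": "localhost"}'::JSONB);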
-- Update stats_logs_fn, update debug
CREATE OR REPLACE FUNCTION api.stats_logs_fn(start_date text DEFAULT NULL::text, end_date text DEFAULT NULL::text, OUT stats jsonb)
RETURNS jsonb
LANGUAGE plpgsql
AS $function$
DECLARE
_start_date TIMESTAMPTZ DEFAULT '1970-01-01';
_end_date TIMESTAMPTZ DEFAULT NOW();
BEGIN
IF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
RAISE WARNING '--> stats_logs_fn, filter result stats by date [%]', start_date;
_start_date := start_date::TIMESTAMPTZ;
_end_date := end_date::TIMESTAMPTZ;
END IF;
--RAISE NOTICE '--> stats_logs_fn, _start_date [%], _end_date [%]', _start_date, _end_date;
WITH
meta AS (
SELECT m.name FROM api.metadata m ),
logs_view AS (
SELECT *
FROM api.logbook l
WHERE _from_time >= _start_date::TIMESTAMPTZ
AND _to_time <= _end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
),
first_date AS (
SELECT _from_time as first_date from logs_view ORDER BY first_date ASC LIMIT 1
),
last_date AS (
SELECT _to_time as last_date from logs_view ORDER BY _to_time DESC LIMIT 1
),
max_speed_id AS (
SELECT id FROM logs_view WHERE max_speed = (SELECT max(max_speed) FROM logs_view) ),
max_wind_speed_id AS (
SELECT id FROM logs_view WHERE max_wind_speed = (SELECT max(max_wind_speed) FROM logs_view)),
max_distance_id AS (
SELECT id FROM logs_view WHERE distance = (SELECT max(distance) FROM logs_view)),
max_duration_id AS (
SELECT id FROM logs_view WHERE duration = (SELECT max(duration) FROM logs_view)),
logs_stats AS (
SELECT
count(*) AS count,
max(max_speed) AS max_speed,
max(max_wind_speed) AS max_wind_speed,
max(distance) AS max_distance,
sum(distance) AS sum_distance,
max(duration) AS max_duration,
sum(duration) AS sum_duration
FROM logs_view l )
--select * from logbook;
-- Return a JSON
SELECT jsonb_build_object(
'name', meta.name,
'first_date', first_date.first_date,
'last_date', last_date.last_date,
'max_speed_id', max_speed_id.id,
'max_wind_speed_id', max_wind_speed_id.id,
'max_duration_id', max_duration_id.id,
'max_distance_id', max_distance_id.id)::jsonb || to_jsonb(logs_stats.*)::jsonb INTO stats
FROM max_speed_id, max_wind_speed_id, max_distance_id, max_duration_id,
logs_stats, meta, logs_view, first_date, last_date;
END;
$function$
;
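-- Example usage (illustrative, commented out): stats for a date range, or all time
--SELECT api.stats_logs_fn('2024-01-01', '2024-12-31');
--SELECT api.stats_logs_fn();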
-- Fix stays and moorage statistics for user by date
CREATE OR REPLACE FUNCTION api.stats_stays_fn(
IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL,
OUT stats JSON) RETURNS JSON AS $stats_stays$
DECLARE
_start_date TIMESTAMPTZ DEFAULT '1970-01-01';
_end_date TIMESTAMPTZ DEFAULT NOW();
BEGIN
IF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
RAISE NOTICE '--> stats_stays_fn, custom filter result stats by date [%]', start_date;
_start_date := start_date::TIMESTAMPTZ;
_end_date := end_date::TIMESTAMPTZ;
END IF;
--RAISE NOTICE '--> stats_stays_fn, _start_date [%], _end_date [%]', _start_date, _end_date;
WITH
stays as (
select distinct(moorage_id) as moorage_id, sum(duration) as duration, count(id) as reference_count
from api.stays s
WHERE arrived >= _start_date::TIMESTAMPTZ
AND departed <= _end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
group by moorage_id
order by moorage_id
),
moorages AS (
SELECT m.id, m.home_flag, m.reference_count, m.stay_duration, m.stay_code, m.country, s.duration, s.reference_count
from api.moorages m, stays s
where s.moorage_id = m.id
order by moorage_id
),
home_ports AS (
select count(*) as home_ports from moorages m where home_flag is true
),
unique_moorages AS (
select count(*) as unique_moorages from moorages m
),
time_at_home_ports AS (
select sum(m.stay_duration) as time_at_home_ports from moorages m where home_flag is true
),
sum_stay_duration AS (
select sum(m.stay_duration) as sum_stay_duration from moorages m where home_flag is false
),
time_spent_away_arr AS (
select m.stay_code,sum(m.stay_duration) as stay_duration from moorages m where home_flag is false group by m.stay_code order by m.stay_code
),
time_spent_arr as (
select jsonb_agg(t.*) as time_spent_away_arr from time_spent_away_arr t
),
time_spent_away AS (
select sum(m.stay_duration) as time_spent_away from moorages m where home_flag is false
),
time_spent as (
select jsonb_agg(t.*) as time_spent_away from time_spent_away t
)
-- Return a JSON
SELECT jsonb_build_object(
'home_ports', home_ports.home_ports,
'unique_moorages', unique_moorages.unique_moorages,
'time_at_home_ports', time_at_home_ports.time_at_home_ports,
'sum_stay_duration', sum_stay_duration.sum_stay_duration,
'time_spent_away', time_spent_away.time_spent_away,
'time_spent_away_arr', time_spent_arr.time_spent_away_arr) INTO stats
FROM home_ports, unique_moorages,
time_at_home_ports, sum_stay_duration, time_spent_away, time_spent_arr;
END;
$stats_stays$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.stats_stays_fn
IS 'Stays/Moorages stats by date';
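-- Example usage (illustrative, commented out):
--SELECT api.stats_stays_fn('2024-01-01', '2024-12-31');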
-- Update api.stats_moorages_view, fix time_spent_at_home_port
CREATE OR REPLACE VIEW api.stats_moorages_view WITH (security_invoker=true,security_barrier=true) AS
WITH
home_ports AS (
select count(*) as home_ports from api.moorages m where home_flag is true
),
unique_moorage AS (
select count(*) as unique_moorage from api.moorages m
),
time_at_home_ports AS (
select sum(m.stay_duration) as time_at_home_ports from api.moorages m where home_flag is true
),
time_spent_away AS (
select sum(m.stay_duration) as time_spent_away from api.moorages m where home_flag is false
)
SELECT
home_ports.home_ports as "home_ports",
unique_moorage.unique_moorage as "unique_moorages",
time_at_home_ports.time_at_home_ports as "time_spent_at_home_port(s)",
time_spent_away.time_spent_away as "time_spent_away"
FROM home_ports, unique_moorage, time_at_home_ports, time_spent_away;
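-- Example usage (illustrative, commented out); the view is security_invoker so
-- the caller's row level security applies:
--SELECT * FROM api.stats_moorages_view;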
-- Add stats_fn, user statistics by date
DROP FUNCTION IF EXISTS api.stats_fn;
CREATE OR REPLACE FUNCTION api.stats_fn(
IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL,
OUT stats JSONB) RETURNS JSONB AS $stats_global$
DECLARE
_start_date TIMESTAMPTZ DEFAULT '1970-01-01';
_end_date TIMESTAMPTZ DEFAULT NOW();
stats_logs JSONB;
stats_moorages JSONB;
stats_logs_topby JSONB;
stats_moorages_topby JSONB;
BEGIN
IF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
RAISE WARNING '--> stats_fn, filter result stats by date [%]', start_date;
_start_date := start_date::TIMESTAMPTZ;
_end_date := end_date::TIMESTAMPTZ;
END IF;
RAISE NOTICE '--> stats_fn, _start_date [%], _end_date [%]', _start_date, _end_date;
-- Get global logs statistics
SELECT api.stats_logs_fn(_start_date::TEXT, _end_date::TEXT) INTO stats_logs;
-- Get global stays/moorages statistics
SELECT api.stats_stays_fn(_start_date::TEXT, _end_date::TEXT) INTO stats_moorages;
-- Get Top 5 trips statistics
WITH
logs_view AS (
SELECT id,avg_speed,max_speed,max_wind_speed,distance,duration
FROM api.logbook l
WHERE _from_time >= _start_date::TIMESTAMPTZ
AND _to_time <= _end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
),
logs_top_avg_speed AS (
SELECT id,avg_speed FROM logs_view
GROUP BY id,avg_speed
ORDER BY avg_speed DESC
LIMIT 5),
logs_top_speed AS (
SELECT id,max_speed FROM logs_view
WHERE max_speed IS NOT NULL
GROUP BY id,max_speed
ORDER BY max_speed DESC
LIMIT 5),
logs_top_wind_speed AS (
SELECT id,max_wind_speed FROM logs_view
WHERE max_wind_speed IS NOT NULL
GROUP BY id,max_wind_speed
ORDER BY max_wind_speed DESC
LIMIT 5),
logs_top_distance AS (
SELECT id FROM logs_view
GROUP BY id,distance
ORDER BY distance DESC
LIMIT 5),
logs_top_duration AS (
SELECT id FROM logs_view
GROUP BY id,duration
ORDER BY duration DESC
LIMIT 5)
-- Stats Top Logs
SELECT jsonb_build_object(
'stats_logs', stats_logs,
'stats_moorages', stats_moorages,
'logs_top_speed', (SELECT jsonb_agg(logs_top_speed.*) FROM logs_top_speed),
'logs_top_avg_speed', (SELECT jsonb_agg(logs_top_avg_speed.*) FROM logs_top_avg_speed),
'logs_top_wind_speed', (SELECT jsonb_agg(logs_top_wind_speed.*) FROM logs_top_wind_speed),
'logs_top_distance', (SELECT jsonb_agg(logs_top_distance.id) FROM logs_top_distance),
'logs_top_duration', (SELECT jsonb_agg(logs_top_duration.id) FROM logs_top_duration)
) INTO stats;
-- Stats top 5 moorages statistics
WITH
stays as (
select distinct(moorage_id) as moorage_id, sum(duration) as duration, count(id) as reference_count
from api.stays s
WHERE s.arrived >= _start_date::TIMESTAMPTZ
AND s.departed <= _end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
group by s.moorage_id
order by s.moorage_id
),
moorages AS (
SELECT m.id, m.home_flag, m.reference_count, m.stay_duration, m.stay_code, m.country, s.duration as dur, s.reference_count as ref_count
from api.moorages m, stays s
where s.moorage_id = m.id
order by s.moorage_id
),
moorages_top_arrivals AS (
SELECT id,ref_count FROM moorages
GROUP BY id,ref_count
ORDER BY ref_count DESC
LIMIT 5),
moorages_top_duration AS (
SELECT id,dur FROM moorages
GROUP BY id,dur
ORDER BY dur DESC
LIMIT 5),
moorages_countries AS (
SELECT DISTINCT(country) FROM moorages
WHERE country IS NOT NULL AND country <> 'unknown'
GROUP BY country
ORDER BY country DESC
LIMIT 5)
SELECT stats || jsonb_build_object(
'moorages_top_arrivals', (SELECT jsonb_agg(moorages_top_arrivals) FROM moorages_top_arrivals),
'moorages_top_duration', (SELECT jsonb_agg(moorages_top_duration) FROM moorages_top_duration),
'moorages_top_countries', (SELECT jsonb_agg(moorages_countries.country) FROM moorages_countries)
) INTO stats;
END;
$stats_global$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.stats_fn
IS 'Stats logbook and moorages by date';
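-- Example usage (illustrative, commented out): combined logs and stays/moorages stats
--SELECT api.stats_fn('2024-01-01', '2024-12-31');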
-- Add mapgl_fn, generate a geojson with all logbook LineStrings
DROP FUNCTION IF EXISTS api.mapgl_fn;
CREATE OR REPLACE FUNCTION api.mapgl_fn(start_log integer DEFAULT NULL::integer, end_log integer DEFAULT NULL::integer, start_date text DEFAULT NULL::text, end_date text DEFAULT NULL::text, OUT geojson jsonb)
RETURNS jsonb
AS $mapgl$
DECLARE
_geojson jsonb;
BEGIN
-- Using sub query to force id order by time
-- Extract GeoJSON LineString and merge into a new GeoJSON
--raise WARNING 'input % % %' , start_log, end_log, public.isnumeric(end_log::text);
IF start_log IS NOT NULL AND end_log IS NULL THEN
end_log := start_log;
END IF;
IF start_date IS NOT NULL AND end_date IS NULL THEN
end_date := start_date;
END IF;
--raise WARNING 'input % % %' , start_log, end_log, public.isnumeric(end_log::text);
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'LineString'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l.id >= start_log
AND l.id <= end_log
AND l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'LineString';
ELSIF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'LineString'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l._from_time >= start_date::TIMESTAMPTZ
AND l._to_time <= end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
AND l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'LineString';
ELSE
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'LineString'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'LineString';
END IF;
-- Generate the GeoJSON with all moorages
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', _geojson || ( SELECT
jsonb_agg(ST_AsGeoJSON(m.*)::JSONB) as moorages_geojson
FROM
( SELECT
id,name,stay_code,
EXTRACT(DAY FROM justify_hours ( stay_duration )) AS Total_Stay,
geog
FROM api.moorages
WHERE geog IS NOT null
) AS m
) ) INTO geojson;
END;
$mapgl$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.mapgl_fn
    IS 'Get all logbook LineStrings along with all moorages into a geojson to be processed by DeckGL';
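-- Example usage (illustrative, commented out), one per filter mode:
--SELECT api.mapgl_fn(100, 120); -- by log id range
--SELECT api.mapgl_fn(start_date => '2024-06-01', end_date => '2024-06-30'); -- by date range
--SELECT api.mapgl_fn(); -- all logs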
-- Refresh user_role permissions
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
-- Add cron_inactivity_fn, cleanup all data for inactive users and vessels
CREATE OR REPLACE FUNCTION public.cron_inactivity_fn()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE
no_activity_rec record;
user_settings jsonb;
total_metrics INTEGER;
del_metrics INTEGER;
out_json JSONB;
BEGIN
-- List accounts with vessel inactivity for more than 200 DAYS
-- List accounts with no vessel created for more than 200 DAYS
-- List accounts with no vessel metadata for more than 200 DAYS
-- Check for users and vessels with no activity for more than 200 days
-- remove data and notify user
RAISE NOTICE 'cron_inactivity_fn';
FOR no_activity_rec in
with accounts as (
SELECT a.email,a.first,a.last,
(a.updated_at < NOW() AT TIME ZONE 'UTC' - INTERVAL '200 DAYS') as no_account_activity,
COALESCE((m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '200 DAYS'),true) as no_metadata_activity,
                m.vessel_id IS null as no_metadata_vessel_id,
                m.time IS null as no_metadata_time,
                v.vessel_id IS null as no_vessel_vessel_id,
                a.preferences->>'ip' as ip,v.name as user_vessel,
                m.name as sk_vessel,v.vessel_id as v_vessel_id,m.vessel_id as m_vessel_id,
a.created_at as account_created,m.time as metadata_updated_at,
v.created_at as vessel_created,v.updated_at as vessel_updated_at
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
order by a.created_at asc
)
select * from accounts a where
(no_account_activity is true
                or no_vessel_vessel_id is true
                or no_metadata_activity is true
                or no_metadata_vessel_id is true
or no_metadata_time is true )
ORDER BY a.account_created asc
LOOP
RAISE NOTICE '-> cron_inactivity_fn for [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_inactivity_fn user_settings [%]', user_settings;
        IF no_activity_rec.no_vessel_vessel_id is true then
            PERFORM send_notification_fn('no_vessel'::TEXT, user_settings::JSONB);
        ELSIF no_activity_rec.no_metadata_vessel_id is true then
PERFORM send_notification_fn('no_metadata'::TEXT, user_settings::JSONB);
ELSIF no_activity_rec.no_metadata_activity is true then
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
ELSIF no_activity_rec.no_account_activity is true then
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
END IF;
-- Send notification
PERFORM send_notification_fn('inactivity'::TEXT, user_settings::JSONB);
-- Delete vessel metrics
IF no_activity_rec.v_vessel_id IS NOT NULL THEN
SELECT count(*) INTO total_metrics from api.metrics where vessel_id = no_activity_rec.v_vessel_id;
WITH deleted AS (delete from api.metrics m where vessel_id = no_activity_rec.v_vessel_id RETURNING *) SELECT count(*) INTO del_metrics FROM deleted;
SELECT jsonb_build_object('total_metrics', total_metrics, 'del_metrics', del_metrics) INTO out_json;
RAISE NOTICE '-> debug cron_inactivity_fn [%]', out_json;
END IF;
END LOOP;
END;
$function$
;
COMMENT ON FUNCTION public.cron_inactivity_fn() IS 'init by pg_cron, check for vessels with no activity for more than 200 days then send notification';
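-- Illustrative pg_cron registration (commented out; the job name and schedule
-- are assumptions, not part of this migration):
--SELECT cron.schedule('cron_inactivity', '0 3 * * *', 'select public.cron_inactivity_fn()');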
-- Add cron_deactivated_fn, delete all data for inactive users and vessels
CREATE OR REPLACE FUNCTION public.cron_deactivated_fn()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE
no_activity_rec record;
user_settings jsonb;
del_vessel_data JSONB;
del_meta INTEGER;
del_vessel INTEGER;
del_account INTEGER;
out_json JSONB;
BEGIN
RAISE NOTICE 'cron_deactivated_fn';
-- List accounts with vessel inactivity for more than 230 DAYS
-- List accounts with no vessel created for more than 230 DAYS
-- List accounts with no vessel metadata for more than 230 DAYS
-- Remove data and remove user and notify user
FOR no_activity_rec in
with accounts as (
SELECT a.email,a.first,a.last,
(a.updated_at < NOW() AT TIME ZONE 'UTC' - INTERVAL '230 DAYS') as no_account_activity,
COALESCE((m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '230 DAYS'),true) as no_metadata_activity,
                m.vessel_id IS null as no_metadata_vessel_id,
                m.time IS null as no_metadata_time,
                v.vessel_id IS null as no_vessel_vessel_id,
                a.preferences->>'ip' as ip,v.name as user_vessel,
                m.name as sk_vessel,v.vessel_id as v_vessel_id,m.vessel_id as m_vessel_id,
a.created_at as account_created,m.time as metadata_updated_at,
v.created_at as vessel_created,v.updated_at as vessel_updated_at
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
order by a.created_at asc
)
select * from accounts a where
(no_account_activity is true
                or no_vessel_vessel_id is true
                or no_metadata_activity is true
                or no_metadata_vessel_id is true
or no_metadata_time is true )
ORDER BY a.account_created asc
LOOP
RAISE NOTICE '-> cron_deactivated_fn for [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_deactivated_fn user_settings [%]', user_settings;
        IF no_activity_rec.no_vessel_vessel_id is true then
            PERFORM send_notification_fn('no_vessel'::TEXT, user_settings::JSONB);
        ELSIF no_activity_rec.no_metadata_vessel_id is true then
PERFORM send_notification_fn('no_metadata'::TEXT, user_settings::JSONB);
ELSIF no_activity_rec.no_metadata_activity is true then
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
ELSIF no_activity_rec.no_account_activity is true then
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
END IF;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
-- Delete vessel data
IF no_activity_rec.v_vessel_id IS NOT NULL THEN
SELECT public.delete_vessel_fn(no_activity_rec.v_vessel_id) INTO del_vessel_data;
WITH deleted AS (delete from api.metadata where vessel_id = no_activity_rec.v_vessel_id RETURNING *) SELECT count(*) INTO del_meta FROM deleted;
SELECT jsonb_build_object('del_metadata', del_meta) || del_vessel_data INTO del_vessel_data;
RAISE NOTICE '-> debug cron_deactivated_fn [%]', del_vessel_data;
END IF;
-- Delete account data
WITH deleted AS (delete from auth.vessels where owner_email = no_activity_rec.email RETURNING *) SELECT count(*) INTO del_vessel FROM deleted;
WITH deleted AS (delete from auth.accounts where email = no_activity_rec.email RETURNING *) SELECT count(*) INTO del_account FROM deleted;
SELECT jsonb_build_object('del_account', del_account, 'del_vessel', del_vessel) || del_vessel_data INTO out_json;
RAISE NOTICE '-> debug cron_deactivated_fn [%]', out_json;
-- TODO remove keycloak and grafana provisioning
END LOOP;
END;
$function$
;
COMMENT ON FUNCTION public.cron_deactivated_fn() IS 'init by pg_cron, check for vessels with no activity for more than 230 days, then send notification and delete account and vessel data';
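-- Illustrative manual run (commented out; note this permanently deletes account
-- and vessel data for every account matching the inactivity criteria):
--SELECT public.cron_deactivated_fn();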
-- Remove unused and duplicate function
DROP FUNCTION IF EXISTS public.cron_process_no_activity_fn;
DROP FUNCTION IF EXISTS public.cron_process_inactivity_fn;
DROP FUNCTION IF EXISTS public.cron_process_deactivated_fn;
-- Update version
UPDATE public.app_settings
SET value='0.7.7'
WHERE "name"='app.version';
\c postgres


@@ -0,0 +1,253 @@
---------------------------------------------------------------------------
-- Copyright 2021-2024 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration October 2024
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Update moorages map, export more properties (notes,reference_count) from moorages tbl
CREATE OR REPLACE FUNCTION api.export_moorages_geojson_fn(OUT geojson jsonb)
RETURNS jsonb
LANGUAGE plpgsql
AS $function$
DECLARE
BEGIN
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features',
( SELECT
json_agg(ST_AsGeoJSON(m.*)::JSON) as moorages_geojson
FROM
( SELECT
id,name,stay_code,notes,reference_count,
EXTRACT(DAY FROM justify_hours ( stay_duration )) AS Total_Stay,
geog
FROM api.moorages
WHERE geog IS NOT NULL
) AS m
)
) INTO geojson;
END;
$function$
;
COMMENT ON FUNCTION api.export_moorages_geojson_fn(out jsonb) IS 'Export moorages as geojson';
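-- Example usage (illustrative, commented out):
--SELECT api.export_moorages_geojson_fn();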
-- Update mapgl_fn, update moorages map sub query to export more properties (notes,reference_count) from moorages tbl
DROP FUNCTION IF EXISTS api.mapgl_fn;
CREATE OR REPLACE FUNCTION api.mapgl_fn(start_log integer DEFAULT NULL::integer, end_log integer DEFAULT NULL::integer, start_date text DEFAULT NULL::text, end_date text DEFAULT NULL::text, OUT geojson jsonb)
RETURNS jsonb
AS $mapgl$
DECLARE
_geojson jsonb;
BEGIN
-- Using sub query to force id order by time
-- Extract GeoJSON LineString and merge into a new GeoJSON
--raise WARNING 'input % % %' , start_log, end_log, public.isnumeric(end_log::text);
IF start_log IS NOT NULL AND end_log IS NULL THEN
end_log := start_log;
END IF;
IF start_date IS NOT NULL AND end_date IS NULL THEN
end_date := start_date;
END IF;
--raise WARNING 'input % % %' , start_log, end_log, public.isnumeric(end_log::text);
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'LineString'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l.id >= start_log
AND l.id <= end_log
AND l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'LineString';
ELSIF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'LineString'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l._from_time >= start_date::TIMESTAMPTZ
AND l._to_time <= end_date::TIMESTAMPTZ + interval '23 hours 59 minutes'
AND l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'LineString';
ELSE
SELECT jsonb_agg(
jsonb_build_object('type', 'Feature',
'properties', f->'properties',
'geometry', jsonb_build_object( 'coordinates', f->'geometry'->'coordinates', 'type', 'LineString'))
) INTO _geojson
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook l
WHERE l.track_geojson IS NOT NULL
ORDER BY l._from_time ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'LineString';
END IF;
-- Generate the GeoJSON with all moorages
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', _geojson || ( SELECT
jsonb_agg(ST_AsGeoJSON(m.*)::JSONB) as moorages_geojson
FROM
( SELECT
id,name,stay_code,notes,reference_count,
EXTRACT(DAY FROM justify_hours ( stay_duration )) AS Total_Stay,
geog
FROM api.moorages
WHERE geog IS NOT null
) AS m
) ) INTO geojson;
END;
$mapgl$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.mapgl_fn
    IS 'Generate a geojson with all logs as geometry LineString with moorages as geometry Point to be processed by DeckGL';
-- Update logbook_update_geojson_fn, fix corrupt linestring properties
CREATE OR REPLACE FUNCTION public.logbook_update_geojson_fn(_id integer, _start text, _end text, OUT _track_geojson json)
RETURNS json
LANGUAGE plpgsql
AS $function$
declare
log_geojson jsonb;
metrics_geojson jsonb;
_map jsonb;
begin
-- GeoJson Feature Logbook linestring
SELECT
ST_AsGeoJSON(log.*) into log_geojson
FROM
( SELECT
id,name,
distance,
duration,
avg_speed,
max_speed,
max_wind_speed,
_from_time,
_to_time,
_from_moorage_id,
_to_moorage_id,
notes,
extra['avg_wind_speed'] as avg_wind_speed,
track_geom
FROM api.logbook
WHERE id = _id
) AS log;
-- GeoJson Feature Metrics point
SELECT
json_agg(ST_AsGeoJSON(t.*)::json) into metrics_geojson
FROM (
( SELECT
time,
courseovergroundtrue,
speedoverground,
windspeedapparent,
longitude,latitude,
'' AS notes,
coalesce(metersToKnots((metrics->'environment.wind.speedTrue')::NUMERIC), null) as truewindspeed,
coalesce(radiantToDegrees((metrics->'environment.wind.directionTrue')::NUMERIC), null) as truewinddirection,
coalesce(status, null) as status,
st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND time >= _start::TIMESTAMPTZ
AND time <= _end::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false)
ORDER BY m.time ASC
)
) AS t;
-- Merge jsonb
SELECT log_geojson::jsonb || metrics_geojson::jsonb into _map;
-- output
SELECT
json_build_object(
'type', 'FeatureCollection',
'features', _map
) into _track_geojson;
END;
$function$
;
COMMENT ON FUNCTION public.logbook_update_geojson_fn(in int4, in text, in text, out json) IS 'Update log details with geojson';
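-- Example usage (illustrative, commented out; the id and time window are
-- placeholders, and the vessel.id setting must be present since the function
-- reads current_setting('vessel.id')):
--SELECT public.logbook_update_geojson_fn(123, '2024-06-01T08:00:00Z', '2024-06-01T12:00:00Z');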
-- Add trigger to update logbook stats from user-edited geojson
DROP FUNCTION IF EXISTS public.update_logbook_with_geojson_trigger_fn;
CREATE OR REPLACE FUNCTION public.update_logbook_with_geojson_trigger_fn() RETURNS TRIGGER AS $$
DECLARE
geojson JSONB;
feature JSONB;
BEGIN
-- Parse the incoming GeoJSON data from the track_geojson column
geojson := NEW.track_geojson::jsonb;
-- Extract the first feature (assume it is the LineString)
feature := geojson->'features'->0;
IF geojson IS NOT NULL AND feature IS NOT NULL AND (feature->'properties' ? 'x-update') THEN
-- Get properties from the feature to extract avg_speed, and max_speed
NEW.avg_speed := (feature->'properties'->>'avg_speed')::FLOAT;
NEW.max_speed := (feature->'properties'->>'max_speed')::FLOAT;
NEW.max_wind_speed := (feature->'properties'->>'max_wind_speed')::FLOAT;
NEW.extra := jsonb_set( NEW.extra,
'{avg_wind_speed}',
to_jsonb((feature->'properties'->>'avg_wind_speed')::FLOAT),
true -- this flag means it will create the key if it does not exist
);
-- Calculate the LineString's actual spatial distance
NEW.track_geom := ST_GeomFromGeoJSON(feature->'geometry'::text);
NEW.distance := TRUNC (ST_Length(NEW.track_geom,false)::INT * 0.0005399568, 4); -- convert to NM
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.update_logbook_with_geojson_trigger_fn
    IS 'Extracts properties (avg_speed, max_speed, max_wind_speed, avg_wind_speed) from the geometry LINESTRING feature of a GeoJSON FeatureCollection, recalculates track_geom and distance, and updates the corresponding api.logbook row';
-- Add trigger on logbook track_geojson update to refresh logbook stats from the edited GeoJSON
CREATE TRIGGER update_logbook_with_geojson_trigger_fn
BEFORE UPDATE OF track_geojson ON api.logbook
FOR EACH ROW
WHEN (NEW.track_geojson IS DISTINCT FROM OLD.track_geojson)
EXECUTE FUNCTION public.update_logbook_with_geojson_trigger_fn();
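-- Illustrative update that fires the trigger (commented out; the id is a
-- placeholder). Flagging the LineString feature with an 'x-update' property
-- makes the trigger re-read avg_speed, max_speed, max_wind_speed, avg_wind_speed
-- and the geometry from the edited GeoJSON, so those properties should be
-- present on the feature:
--UPDATE api.logbook
--    SET track_geojson = jsonb_set(track_geojson, '{features,0,properties,x-update}', 'true')
--    WHERE id = 123;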
-- Refresh user_role permissions
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
-- Update version
UPDATE public.app_settings
SET value='0.7.8'
WHERE "name"='app.version';
\c postgres


@@ -0,0 +1,219 @@
---------------------------------------------------------------------------
-- Copyright 2021-2025 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration January-March 2025
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Update metadata table, mark client_id as deprecated
COMMENT ON COLUMN api.metadata.client_id IS 'Deprecated client_id to be removed';
-- Update metrics table, mark client_id as deprecated
COMMENT ON COLUMN api.metrics.client_id IS 'Deprecated client_id to be removed';
-- Update metadata table update configuration column type to jsonb and comment
ALTER TABLE api.metadata ALTER COLUMN "configuration" TYPE jsonb USING "configuration"::jsonb;
COMMENT ON COLUMN api.metadata.configuration IS 'Signalk path mapping for metrics';
-- Update metadata table add new column available_keys and comment
ALTER TABLE api.metadata ADD available_keys jsonb NULL;
COMMENT ON COLUMN api.metadata.available_keys IS 'Signalk paths with unit for custom mapping';
--DROP FUNCTION public.metadata_upsert_trigger_fn();
-- Update metadata_upsert_trigger_fn to metadata table to support configuration and available_keys and deprecated client_id
CREATE OR REPLACE FUNCTION public.metadata_upsert_trigger_fn()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
metadata_id integer;
metadata_active boolean;
BEGIN
-- Require Signalk plugin version 0.4.0
-- Set client_id to new value to allow RLS
--PERFORM set_config('vessel.client_id', NEW.client_id, false);
-- UPSERT - Insert vs Update for Metadata
--RAISE NOTICE 'metadata_upsert_trigger_fn';
--PERFORM set_config('vessel.id', NEW.vessel_id, true);
--RAISE WARNING 'metadata_upsert_trigger_fn [%] [%]', current_setting('vessel.id', true), NEW;
SELECT m.id,m.active INTO metadata_id, metadata_active
FROM api.metadata m
WHERE m.vessel_id IS NOT NULL AND m.vessel_id = current_setting('vessel.id', true);
--RAISE NOTICE 'metadata_id is [%]', metadata_id;
IF metadata_id IS NOT NULL THEN
-- send notification if boat is back online
IF metadata_active is False THEN
-- Add monitor online entry to process queue for later notification
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('monitoring_online', metadata_id, now(), current_setting('vessel.id', true));
END IF;
-- Update vessel metadata
UPDATE api.metadata
SET
name = NEW.name,
mmsi = NEW.mmsi,
--client_id = NEW.client_id,
length = NEW.length,
beam = NEW.beam,
height = NEW.height,
ship_type = NEW.ship_type,
plugin_version = NEW.plugin_version,
signalk_version = NEW.signalk_version,
platform = REGEXP_REPLACE(NEW.platform, '[^a-zA-Z0-9\(\) ]', '', 'g'),
-- configuration = NEW.configuration, -- ignore configuration from vessel, it is manage by user
-- time = NEW.time, ignore the time sent by the vessel as it is out of sync sometimes.
time = NOW(), -- overwrite the time sent by the vessel
available_keys = NEW.available_keys,
active = true
WHERE id = metadata_id;
RETURN NULL; -- Ignore insert
ELSE
IF NEW.vessel_id IS NULL THEN
-- set vessel_id from jwt if not present in INSERT query
NEW.vessel_id := current_setting('vessel.id');
END IF;
-- Ignore and overwrite the time sent by the vessel
NEW.time := NOW();
-- Insert new vessel metadata
RETURN NEW; -- Insert new vessel metadata
END IF;
END;
$function$
;
COMMENT ON FUNCTION public.metadata_upsert_trigger_fn() IS 'process metadata from vessel, upsert';
-- Create or replace the function that will be executed by the trigger
-- Add metadata table trigger for update_metadata_configuration
CREATE OR REPLACE FUNCTION api.update_metadata_configuration()
RETURNS TRIGGER AS $$
BEGIN
-- Require Signalk plugin version 0.4.0
-- Update the configuration field with current date in ISO format
-- Using jsonb_set if configuration is already a JSONB field
IF NEW.configuration IS NOT NULL AND
jsonb_typeof(NEW.configuration) = 'object' THEN
NEW.configuration = jsonb_set(
NEW.configuration,
'{update_at}',
to_jsonb(to_char(NOW(), 'YYYY-MM-DD"T"HH24:MI:SS"Z"'))
);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION api.update_metadata_configuration() IS 'Update the configuration field with current date in ISO format';
-- Create the trigger
CREATE TRIGGER metadata_update_configuration_trigger
BEFORE UPDATE ON api.metadata
FOR EACH ROW
EXECUTE FUNCTION api.update_metadata_configuration();
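-- Illustrative update (commented out; the key name mirrors the keys used by the
-- monitoring functions below): any update that leaves configuration a jsonb
-- object gets stamped with an update_at timestamp by the trigger:
--UPDATE api.metadata
--    SET configuration = configuration || '{"windSpeedKey": "environment.wind.speedOverGround"}'
--    WHERE vessel_id = current_setting('vessel.id', false);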
-- Update api.export_logbook_geojson_linestring_trip_fn, add metadata properties
CREATE OR REPLACE FUNCTION api.export_logbooks_geojson_linestring_trips_fn(
start_log integer DEFAULT NULL::integer,
end_log integer DEFAULT NULL::integer,
start_date text DEFAULT NULL::text,
end_date text DEFAULT NULL::text,
OUT geojson jsonb
) RETURNS jsonb
LANGUAGE plpgsql
AS $function$
DECLARE
logs_geojson jsonb;
BEGIN
-- Normalize start and end values
IF start_log IS NOT NULL AND end_log IS NULL THEN end_log := start_log; END IF;
IF start_date IS NOT NULL AND end_date IS NULL THEN end_date := start_date; END IF;
WITH logbook_data AS (
-- get the logbook geometry and metadata, an array for each log
SELECT id, name,
starttimestamp(trip),
endtimestamp(trip),
--speed(trip_sog),
duration(trip),
--length(trip) as length, -- Meters
(length(trip) * 0.0005399568)::numeric as distance, -- NM
twavg(trip_sog) as avg_sog,
maxValue(trip_sog) as max_sog,
maxValue(trip_depth) as max_depth, -- Depth
maxValue(trip_batt_charge) as max_batt_charge, -- Battery Charge
maxValue(trip_batt_voltage) as max_batt_voltage, -- Battery Voltage
maxValue(trip_temp_water) as max_temp_water, -- Temperature water
maxValue(trip_temp_out) as max_temp_out, -- Temperature outside
maxValue(trip_pres_out) as max_pres_out, -- Pressure outside
maxValue(trip_hum_out) as max_hum_out, -- Humidity outside
twavg(trip_depth) as avg_depth, -- Depth
twavg(trip_batt_charge) as avg_batt_charge, -- Battery Charge
twavg(trip_batt_voltage) as avg_batt_voltage, -- Battery Voltage
twavg(trip_temp_water) as avg_temp_water, -- Temperature water
twavg(trip_temp_out) as avg_temp_out, -- Temperature outside
twavg(trip_pres_out) as avg_pres_out, -- Pressure outside
twavg(trip_hum_out) as avg_hum_out, -- Humidity outside
trajectory(l.trip)::geometry as track_geog -- extract trip to geography
FROM api.logbook l
WHERE (start_log IS NULL OR l.id >= start_log) AND
(end_log IS NULL OR l.id <= end_log) AND
(start_date IS NULL OR l._from_time >= start_date::TIMESTAMPTZ) AND
(end_date IS NULL OR l._to_time <= end_date::TIMESTAMPTZ + interval '23 hours 59 minutes') AND
l.trip IS NOT NULL
ORDER BY l._from_time ASC
),
collect as (
SELECT ST_Collect(
ARRAY(
SELECT track_geog FROM logbook_data))
)
-- Create the GeoJSON response
SELECT jsonb_build_object(
'type', 'FeatureCollection',
'features', json_agg(ST_AsGeoJSON(logs.*)::json)) INTO geojson FROM logbook_data logs;
END;
$function$;
-- Description
COMMENT ON FUNCTION api.export_logbooks_geojson_linestring_trips_fn IS 'Generate geojson geometry LineString from trip with the corresponding properties';
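-- Example usage (illustrative, commented out): merge all Q1 2025 trips into a
-- single GeoJSON FeatureCollection:
--SELECT api.export_logbooks_geojson_linestring_trips_fn(start_date => '2025-01-01', end_date => '2025-03-31');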
-- Add public.get_season, return the season based on the input date for logbook tag
CREATE OR REPLACE FUNCTION public.get_season(input_date TIMESTAMPTZ)
RETURNS TEXT AS $$
BEGIN
    CASE
        WHEN EXTRACT(MONTH FROM input_date) BETWEEN 3 AND 5 THEN
            RETURN 'Spring';
        WHEN EXTRACT(MONTH FROM input_date) BETWEEN 6 AND 8 THEN
            RETURN 'Summer';
        WHEN EXTRACT(MONTH FROM input_date) BETWEEN 9 AND 11 THEN
            RETURN 'Fall';
        ELSE
            RETURN 'Winter';
    END CASE;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
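-- Example usage (illustrative; northern-hemisphere meteorological seasons):
--SELECT public.get_season('2025-07-14T00:00:00Z'::TIMESTAMPTZ); -- 'Summer'
--SELECT public.get_season('2025-01-10T00:00:00Z'::TIMESTAMPTZ); -- 'Winter'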
-- Refresh permissions
GRANT SELECT ON TABLE api.metrics,api.metadata TO scheduler;
GRANT INSERT, UPDATE, SELECT ON TABLE api.logbook,api.moorages,api.stays TO scheduler;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO scheduler;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO scheduler;
GRANT SELECT, UPDATE ON TABLE public.process_queue TO scheduler;
-- Update version
UPDATE public.app_settings
SET value='0.9.0'
WHERE "name"='app.version';


@@ -0,0 +1,734 @@
---------------------------------------------------------------------------
-- Copyright 2021-2025 Francois Lacroix <xbgmsharp@gmail.com>
-- This file is part of PostgSail which is released under Apache License, Version 2.0 (the "License").
-- See file LICENSE or go to http://www.apache.org/licenses/LICENSE-2.0 for full license details.
--
-- Migration June/July 2025
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
\echo 'Timing mode is enabled'
\timing
\echo 'Force timezone, just in case'
set timezone to 'UTC';
-- Update plugin upgrade message
UPDATE public.email_templates
SET email_content='Hello __RECIPIENT__,
Please upgrade your PostgSail Signalk plugin. Make sure you restart your Signalk instance after upgrading. Be sure to contact me if you encounter any issues.'
WHERE "name"='skplugin_upgrade';
-- DROP FUNCTION api.login(text, text);
-- Update api.login, update the connected_at field to the current time
CREATE OR REPLACE FUNCTION api.login(email text, pass text)
RETURNS auth.jwt_token
LANGUAGE plpgsql
SECURITY DEFINER
AS $function$
declare
_role name;
result auth.jwt_token;
app_jwt_secret text;
_email_valid boolean := false;
_email text := email;
_user_id text := null;
_user_disable boolean := false;
headers json := current_setting('request.headers', true)::json;
client_ip text := coalesce(headers->>'x-client-ip', NULL);
begin
-- check email and password
select auth.user_role(email, pass) into _role;
if _role is null then
-- HTTP/403
--raise invalid_password using message = 'invalid user or password';
-- HTTP/401
raise insufficient_privilege using message = 'invalid user or password';
end if;
    -- Check if user is disabled due to abuse
SELECT preferences['disable'],user_id INTO _user_disable,_user_id
FROM auth.accounts a
WHERE a.email = _email;
IF _user_disable is True then
-- due to the raise, the insert is never committed.
--INSERT INTO process_queue (channel, payload, stored, ref_id)
-- VALUES ('account_disable', _email, now(), _user_id);
        RAISE sqlstate 'PT402' using message = 'Account disabled, contact us',
detail = 'Quota exceeded',
hint = 'Upgrade your plan';
END IF;
-- Check email_valid and generate OTP
SELECT preferences['email_valid'],user_id INTO _email_valid,_user_id
FROM auth.accounts a
WHERE a.email = _email;
IF _email_valid is null or _email_valid is False THEN
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('email_otp', _email, now(), _user_id);
END IF;
-- Track IP per user to avoid abuse
--RAISE WARNING 'api.login debug: [%],[%]', client_ip, login.email;
IF client_ip IS NOT NULL THEN
UPDATE auth.accounts a SET
preferences = jsonb_recursive_merge(a.preferences, jsonb_build_object('ip', client_ip)),
connected_at = NOW()
WHERE a.email = login.email;
END IF;
-- Get app_jwt_secret
SELECT value INTO app_jwt_secret
FROM app_settings
WHERE name = 'app.jwt_secret';
--RAISE WARNING 'api.login debug: [%],[%],[%]', app_jwt_secret, _role, login.email;
-- Generate jwt
select jwt.sign(
-- row_to_json(r), ''
-- row_to_json(r)::json, current_setting('app.jwt_secret')::text
row_to_json(r)::json, app_jwt_secret
) as token
from (
select _role as role, login.email as email, -- TODO replace with user_id
-- select _role as role, user_id as uid, -- add support in check_jwt
extract(epoch from now())::integer + 60*60 as exp
) r
into result;
return result;
end;
$function$
;
-- Description
COMMENT ON FUNCTION api.login IS 'Handle user login, returns a JWT token with user role and email.';
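-- Example usage (illustrative, commented out; credentials are placeholders).
-- Returns a signed JWT embedding the user role and email:
--SELECT api.login('user@example.com', 'my-secret-password');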
-- DROP FUNCTION api.monitoring_history_fn(in text, out jsonb);
-- Update monitoring_history_fn to use custom user settings for metrics
CREATE OR REPLACE FUNCTION api.monitoring_history_fn(time_interval text DEFAULT '24'::text, OUT history_metrics jsonb)
RETURNS jsonb
LANGUAGE plpgsql
AS $function$
DECLARE
bucket_interval interval := '5 minutes';
BEGIN
RAISE NOTICE '-> monitoring_history_fn';
SELECT CASE time_interval
WHEN '24' THEN '5 minutes'
WHEN '48' THEN '2 hours'
WHEN '72' THEN '4 hours'
WHEN '168' THEN '7 hours'
ELSE '5 minutes'
END bucket INTO bucket_interval;
RAISE NOTICE '-> monitoring_history_fn % %', time_interval, bucket_interval;
WITH history_table AS (
SELECT time_bucket(bucket_interval::INTERVAL, mt.time) AS time_bucket,
avg(-- Water Temperature
COALESCE(
mt.metrics->'water'->>'temperature',
mt.metrics->>(md.configuration->>'waterTemperatureKey'),
mt.metrics->>'environment.water.temperature'
)::FLOAT) AS waterTemperature,
avg(-- Inside Temperature
COALESCE(
mt.metrics->'temperature'->>'inside',
mt.metrics->>(md.configuration->>'insideTemperatureKey'),
mt.metrics->>'environment.inside.temperature'
)::FLOAT) AS insideTemperature,
avg(-- Outside Temperature
COALESCE(
mt.metrics->'temperature'->>'outside',
mt.metrics->>(md.configuration->>'outsideTemperatureKey'),
mt.metrics->>'environment.outside.temperature'
)::FLOAT) AS outsideTemperature,
avg(-- Wind Speed True
COALESCE(
mt.metrics->'wind'->>'speed',
mt.metrics->>(md.configuration->>'windSpeedKey'),
mt.metrics->>'environment.wind.speedTrue'
)::FLOAT) AS windSpeedOverGround,
avg(-- Inside Humidity
COALESCE(
mt.metrics->'humidity'->>'inside',
mt.metrics->>(md.configuration->>'insideHumidityKey'),
mt.metrics->>'environment.inside.relativeHumidity',
mt.metrics->>'environment.inside.humidity'
)::FLOAT) AS insideHumidity,
avg(-- Outside Humidity
COALESCE(
mt.metrics->'humidity'->>'outside',
mt.metrics->>(md.configuration->>'outsideHumidityKey'),
mt.metrics->>'environment.outside.relativeHumidity',
mt.metrics->>'environment.outside.humidity'
)::FLOAT) AS outsideHumidity,
avg(-- Outside Pressure
COALESCE(
mt.metrics->'pressure'->>'outside',
mt.metrics->>(md.configuration->>'outsidePressureKey'),
mt.metrics->>'environment.outside.pressure'
)::FLOAT) AS outsidePressure,
avg(--Inside Pressure
COALESCE(
mt.metrics->'pressure'->>'inside',
mt.metrics->>(md.configuration->>'insidePressureKey'),
mt.metrics->>'environment.inside.pressure'
)::FLOAT) AS insidePressure,
avg(-- Battery Charge (State of Charge)
COALESCE(
mt.metrics->'battery'->>'charge',
mt.metrics->>(md.configuration->>'stateOfChargeKey'),
mt.metrics->>'electrical.batteries.House.capacity.stateOfCharge'
)::FLOAT) AS batteryCharge,
avg(-- Battery Voltage
COALESCE(
mt.metrics->'battery'->>'voltage',
mt.metrics->>(md.configuration->>'voltageKey'),
mt.metrics->>'electrical.batteries.House.voltage'
)::FLOAT) AS batteryVoltage,
avg(-- Water Depth
COALESCE(
mt.metrics->'water'->>'depth',
mt.metrics->>(md.configuration->>'depthKey'),
mt.metrics->>'environment.depth.belowTransducer'
)::FLOAT) AS depth
FROM api.metrics mt
JOIN api.metadata md ON md.vessel_id = mt.vessel_id
WHERE mt.time > (NOW() AT TIME ZONE 'UTC' - INTERVAL '1 hours' * time_interval::NUMERIC)
GROUP BY time_bucket
ORDER BY time_bucket asc
)
SELECT jsonb_agg(history_table) INTO history_metrics FROM history_table;
END
$function$
;
-- Description
COMMENT ON FUNCTION api.monitoring_history_fn(in text, out jsonb) IS 'Export metrics from a time period 24h, 48h, 72h, 7d';
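-- Example usage (illustrative, commented out): last 48 hours, bucketed by 2 hours
--SELECT api.monitoring_history_fn('48');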
-- DROP FUNCTION public.cron_alerts_fn();
-- Update cron_alerts_fn to check for alerts; filter out empty strings ("") so they are not included in the result.
CREATE OR REPLACE FUNCTION public.cron_alerts_fn()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE
alert_rec record;
default_last_metric TIMESTAMPTZ := NOW() - interval '1 day';
last_metric TIMESTAMPTZ;
metric_rec record;
app_settings JSONB;
user_settings JSONB;
alerting JSONB;
_alarms JSONB;
alarms TEXT;
alert_default JSONB := '{
"low_pressure_threshold": 990,
"high_wind_speed_threshold": 30,
"low_water_depth_threshold": 1,
"min_notification_interval": 6,
"high_pressure_drop_threshold": 12,
"low_battery_charge_threshold": 90,
"low_battery_voltage_threshold": 12.5,
"low_water_temperature_threshold": 10,
"low_indoor_temperature_threshold": 7,
"low_outdoor_temperature_threshold": 3
}';
BEGIN
-- Check for new event notification pending update
RAISE NOTICE 'cron_alerts_fn';
FOR alert_rec in
SELECT
a.user_id,a.email,v.vessel_id,
            COALESCE((a.preferences->>'alert_last_metric'), default_last_metric::TEXT) as last_metric,
(alert_default || ( -- Filters out empty strings (""), so they are not included in the result.
SELECT jsonb_object_agg(key, value)
FROM jsonb_each(a.preferences->'alerting')
WHERE value <> '""'
)) as alerting,
(a.preferences->'alarms')::JSONB as alarms,
m.configuration as config
FROM auth.accounts a
LEFT JOIN auth.vessels AS v ON v.owner_email = a.email
LEFT JOIN api.metadata AS m ON m.vessel_id = v.vessel_id
WHERE (a.preferences->'alerting'->'enabled')::boolean = True
AND m.active = True
LOOP
RAISE NOTICE '-> cron_alerts_fn for [%]', alert_rec;
PERFORM set_config('vessel.id', alert_rec.vessel_id, false);
PERFORM set_config('user.email', alert_rec.email, false);
--RAISE WARNING 'public.cron_process_alert_rec_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(alert_rec.vessel_id::TEXT);
RAISE NOTICE '-> cron_alerts_fn checking user_settings [%]', user_settings;
-- Get all metrics from the last last_metric avg by 5 minutes
FOR metric_rec in
SELECT time_bucket('5 minutes', m.time) AS time_bucket,
                avg(-- Inside Temperature
                    COALESCE(
                        m.metrics->'temperature'->>'inside',
                        m.metrics->>(alert_rec.config->>'insideTemperatureKey'),
                        m.metrics->>'environment.inside.temperature'
                    )::FLOAT) AS intemp,
                avg(-- Wind Speed True
                    COALESCE(
                        m.metrics->'wind'->>'speed',
                        m.metrics->>(alert_rec.config->>'windSpeedKey'),
                        m.metrics->>'environment.wind.speedTrue'
                    )::FLOAT) AS wind,
                avg(-- Water Depth
                    COALESCE(
                        m.metrics->'water'->>'depth',
                        m.metrics->>(alert_rec.config->>'depthKey'),
                        m.metrics->>'environment.depth.belowTransducer'
                    )::FLOAT) AS watdepth,
avg(-- Outside Temperature
COALESCE(
m.metrics->'temperature'->>'outside',
m.metrics->>(alert_rec.config->>'outsideTemperatureKey'),
m.metrics->>'environment.outside.temperature'
)::NUMERIC) AS outtemp,
avg(-- Water Temperature
COALESCE(
m.metrics->'water'->>'temperature',
m.metrics->>(alert_rec.config->>'waterTemperatureKey'),
m.metrics->>'environment.water.temperature'
)::NUMERIC) AS wattemp,
avg(-- Outside Pressure
COALESCE(
m.metrics->'pressure'->>'outside',
m.metrics->>(alert_rec.config->>'outsidePressureKey'),
m.metrics->>'environment.outside.pressure'
)::NUMERIC) AS pressure,
avg(-- Battery Voltage
COALESCE(
m.metrics->'battery'->>'voltage',
m.metrics->>(alert_rec.config->>'voltageKey'),
m.metrics->>'electrical.batteries.House.voltage'
)::NUMERIC) AS voltage,
avg(-- Battery Charge (State of Charge)
COALESCE(
m.metrics->'battery'->>'charge',
m.metrics->>(alert_rec.config->>'stateOfChargeKey'),
m.metrics->>'electrical.batteries.House.capacity.stateOfCharge'
)::NUMERIC) AS charge
FROM api.metrics m
WHERE vessel_id = alert_rec.vessel_id
AND m.time >= alert_rec.last_metric::TIMESTAMPTZ
GROUP BY time_bucket
ORDER BY time_bucket ASC LIMIT 100
LOOP
RAISE NOTICE '-> cron_alerts_fn checking metrics [%]', metric_rec;
RAISE NOTICE '-> cron_alerts_fn checking alerting [%]', alert_rec.alerting;
--RAISE NOTICE '-> cron_alerts_fn checking debug [%] [%]', kelvinToCel(metric_rec.intemp), (alert_rec.alerting->'low_indoor_temperature_threshold');
IF metric_rec.intemp IS NOT NULL AND public.kelvintocel(metric_rec.intemp::NUMERIC) < (alert_rec.alerting->'low_indoor_temperature_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug indoor_temp [%]', (alert_rec.alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug indoor_temp [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_indoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_indoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_indoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.intemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.intemp) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_indoor_temperature_threshold';
END IF;
IF metric_rec.outtemp IS NOT NULL AND public.kelvintocel(metric_rec.outtemp::NUMERIC) < (alert_rec.alerting->>'low_outdoor_temperature_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug outdoor_temp [%]', (alert_rec.alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug outdoor_temp [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_outdoor_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_outdoor_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_outdoor_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.outtemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_outdoor_temperature_threshold value:'|| kelvinToCel(metric_rec.outtemp) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_outdoor_temperature_threshold';
END IF;
IF metric_rec.wattemp IS NOT NULL AND public.kelvintocel(metric_rec.wattemp::NUMERIC) < (alert_rec.alerting->>'low_water_temperature_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug water_temp [%]', (alert_rec.alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug water_temp [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_temperature_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_temperature_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_temperature_threshold": {"value": '|| kelvinToCel(metric_rec.wattemp) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_temperature_threshold value:'|| kelvinToCel(metric_rec.wattemp) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_temperature_threshold';
END IF;
IF metric_rec.watdepth IS NOT NULL AND metric_rec.watdepth::NUMERIC < (alert_rec.alerting->'low_water_depth_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug water_depth [%]', (alert_rec.alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug water_depth [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_water_depth_threshold'->>'date') IS NULL) OR
(((_alarms->'low_water_depth_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_water_depth_threshold": {"value": '|| metric_rec.watdepth ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_water_depth_threshold value:'|| ROUND(metric_rec.watdepth,2) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_water_depth_threshold';
END IF;
if metric_rec.pressure IS NOT NULL AND metric_rec.pressure::NUMERIC < (alert_rec.alerting->'high_pressure_drop_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug pressure [%]', (alert_rec.alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug pressure [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_pressure_drop_threshold'->>'date') IS NULL) OR
(((_alarms->'high_pressure_drop_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_pressure_drop_threshold": {"value": '|| metric_rec.pressure ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_pressure_drop_threshold value:'|| ROUND(metric_rec.pressure,2) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_pressure_drop_threshold';
END IF;
IF metric_rec.wind IS NOT NULL AND metric_rec.wind::NUMERIC > (alert_rec.alerting->'high_wind_speed_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug wind [%]', (alert_rec.alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug wind [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'high_wind_speed_threshold'->>'date') IS NULL) OR
(((_alarms->'high_wind_speed_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"high_wind_speed_threshold": {"value": '|| metric_rec.wind ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "high_wind_speed_threshold value:'|| ROUND(metric_rec.wind,2) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug high_wind_speed_threshold';
END IF;
IF metric_rec.voltage IS NOT NULL AND metric_rec.voltage::NUMERIC < (alert_rec.alerting->'low_battery_voltage_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug voltage [%]', (alert_rec.alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug voltage [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_voltage_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_voltage_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_voltage_threshold": {"value": '|| metric_rec.voltage ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_voltage_threshold value:'|| ROUND(metric_rec.voltage,2) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_voltage_threshold';
END IF;
IF metric_rec.charge IS NOT NULL AND (metric_rec.charge::NUMERIC*100) < (alert_rec.alerting->'low_battery_charge_threshold')::NUMERIC then
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', (alert_rec.alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ;
RAISE NOTICE '-> cron_alerts_fn checking debug [%]', metric_rec.time_bucket::TIMESTAMPTZ;
-- Get latest alarms
SELECT preferences->'alarms' INTO _alarms FROM auth.accounts a WHERE a.email = current_setting('user.email', false);
-- Is alarm in the min_notification_interval time frame
IF (
((_alarms->'low_battery_charge_threshold'->>'date') IS NULL) OR
(((_alarms->'low_battery_charge_threshold'->>'date')::TIMESTAMPTZ
+ ((interval '1 hour') * (alert_rec.alerting->>'min_notification_interval')::NUMERIC))
< metric_rec.time_bucket::TIMESTAMPTZ)
) THEN
-- Add alarm
alarms := '{"low_battery_charge_threshold": {"value": '|| (metric_rec.charge*100) ||', "date":"' || metric_rec.time_bucket || '"}}';
-- Merge alarms
SELECT public.jsonb_recursive_merge(_alarms::jsonb, alarms::jsonb) into _alarms;
-- Update alarms for user
PERFORM api.update_user_preferences_fn('{alarms}'::TEXT, _alarms::TEXT);
-- Gather user settings
user_settings := get_user_settings_from_vesselid_fn(current_setting('vessel.id', false));
SELECT user_settings::JSONB || ('{"alert": "low_battery_charge_threshold value:'|| ROUND(metric_rec.charge::NUMERIC*100,2) ||' date:'|| metric_rec.time_bucket ||' "}'::text)::JSONB into user_settings;
-- Send notification
PERFORM send_notification_fn('alert'::TEXT, user_settings::JSONB);
-- DEBUG
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold +interval';
END IF;
RAISE NOTICE '-> cron_alerts_fn checking debug low_battery_charge_threshold';
END IF;
-- Record last metrics time
SELECT metric_rec.time_bucket INTO last_metric;
END LOOP;
PERFORM api.update_user_preferences_fn('{alert_last_metric}'::TEXT, last_metric::TEXT);
END LOOP;
END;
$function$
;
-- Description
COMMENT ON FUNCTION public.cron_alerts_fn() IS 'init by pg_cron to check for alerts';
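-- Illustrative sketch (not part of the migration): how the min_notification_interval
-- debounce used throughout cron_alerts_fn evaluates. With a last alarm recorded at
-- 08:00 and min_notification_interval = 6 (hours), a metric bucketed at 13:00 is
-- suppressed while one at 14:30 triggers a new notification:
--   SELECT ('2025-01-01T08:00:00Z'::TIMESTAMPTZ + (interval '1 hour') * 6) < '2025-01-01T13:00:00Z'::TIMESTAMPTZ; -- false, suppressed
--   SELECT ('2025-01-01T08:00:00Z'::TIMESTAMPTZ + (interval '1 hour') * 6) < '2025-01-01T14:30:00Z'::TIMESTAMPTZ; -- true, notify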
-- DROP FUNCTION public.process_pre_logbook_fn(int4);
-- Update process_pre_logbook_fn to detect and avoid logbooks with more than 1000NM in less than 15h
CREATE OR REPLACE FUNCTION public.process_pre_logbook_fn(_id integer)
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE
logbook_rec record;
avg_rec record;
geo_rec record;
_invalid_time boolean;
_invalid_interval boolean;
_invalid_distance boolean;
_invalid_ratio boolean;
count_metric numeric;
previous_stays_id numeric;
current_stays_departed text;
current_stays_id numeric;
current_stays_active boolean;
timebucket boolean;
BEGIN
-- Validate input, _id must be a positive integer
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> process_pre_logbook_fn invalid input %', _id;
RETURN;
END IF;
-- Get the logbook record, ensuring all necessary fields exist
SELECT * INTO logbook_rec
FROM api.logbook
WHERE active IS false
AND id = _id
AND _from_lng IS NOT NULL
AND _from_lat IS NOT NULL
AND _to_lng IS NOT NULL
AND _to_lat IS NOT NULL;
-- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> process_pre_logbook_fn invalid logbook %', _id;
RETURN;
END IF;
PERFORM set_config('vessel.id', logbook_rec.vessel_id, false);
--RAISE WARNING 'public.process_logbook_queue_fn() scheduler vessel.id %, user.id', current_setting('vessel.id', false), current_setting('user.id', false);
-- Check if all metrics are within 50 meters based on geo location
count_metric := logbook_metrics_dwithin_fn(logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT, logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
RAISE NOTICE '-> process_pre_logbook_fn logbook_metrics_dwithin_fn count:[%]', count_metric;
-- Calculate logbook data average and geo
-- Update logbook entry with the latest metric data and calculate data
avg_rec := logbook_update_avg_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
geo_rec := logbook_update_geom_distance_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
-- Avoid/ignore/delete logbooks with stationary movement or time sync issues
-- Check time start vs end
SELECT logbook_rec._to_time::TIMESTAMPTZ < logbook_rec._from_time::TIMESTAMPTZ INTO _invalid_time;
-- Is distance less than 0.010 NM
SELECT geo_rec._track_distance < 0.010 INTO _invalid_distance;
-- Is duration less than 100 secs
SELECT (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ) < (100::text||' secs')::interval INTO _invalid_interval;
-- If we have more than 800NM in less than 15h
IF geo_rec._track_distance >= 800 AND (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ) < (15::text||' hours')::interval THEN
_invalid_distance := True;
_invalid_interval := True;
--RAISE NOTICE '-> process_pre_logbook_fn invalid logbook data id [%], _invalid_distance [%], _invalid_interval [%]', logbook_rec.id, _invalid_distance, _invalid_interval;
END IF;
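-- (Illustrative note, not in the original: 800NM in under 15h implies an average
--  speed above ~53 knots, implausible for a sailing vessel, hence flagged invalid.)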
-- If we have 20 metrics or fewer, less than 0.5NM, or an average speed under 0.5knts
-- Check whether the in-zone metrics represent at least 60% of the total entries
IF count_metric::NUMERIC <= 20 OR geo_rec._track_distance < 0.5 OR avg_rec.avg_speed < 0.5 THEN
SELECT (count_metric::NUMERIC / avg_rec.count_metric::NUMERIC) >= 0.60 INTO _invalid_ratio;
END IF;
-- If stationary, fix the related data: metrics, logbook, stays, moorage
IF _invalid_time IS True OR _invalid_distance IS True
OR _invalid_interval IS True OR count_metric = avg_rec.count_metric
OR _invalid_ratio IS True
OR avg_rec.count_metric <= 3 THEN
RAISE NOTICE '-> process_pre_logbook_fn invalid logbook data id [%], _invalid_time [%], _invalid_distance [%], _invalid_interval [%], count_metric_in_zone [%], count_metric_log [%], _invalid_ratio [%]',
logbook_rec.id, _invalid_time, _invalid_distance, _invalid_interval, count_metric, avg_rec.count_metric, _invalid_ratio;
-- Update metrics status to moored
UPDATE api.metrics
SET status = 'moored'
WHERE time >= logbook_rec._from_time::TIMESTAMPTZ
AND time <= logbook_rec._to_time::TIMESTAMPTZ
AND vessel_id = current_setting('vessel.id', false);
-- Update logbook
UPDATE api.logbook
SET notes = 'invalid logbook data, stationary need to fix metrics?'
WHERE vessel_id = current_setting('vessel.id', false)
AND id = logbook_rec.id;
-- Get related stays
SELECT id,departed,active INTO current_stays_id,current_stays_departed,current_stays_active
FROM api.stays s
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived = logbook_rec._to_time::TIMESTAMPTZ;
-- Update related stays
UPDATE api.stays s
SET notes = 'invalid stays data, stationary need to fix metrics?'
WHERE vessel_id = current_setting('vessel.id', false)
AND arrived = logbook_rec._to_time::TIMESTAMPTZ;
-- Find previous stays
SELECT id INTO previous_stays_id
FROM api.stays s
WHERE s.vessel_id = current_setting('vessel.id', false)
AND s.arrived < logbook_rec._to_time::TIMESTAMPTZ
ORDER BY s.arrived DESC LIMIT 1;
-- Update previous stays with the departed time from current stays
-- and set the active state from current stays
UPDATE api.stays
SET departed = current_stays_departed::TIMESTAMPTZ,
active = current_stays_active
WHERE vessel_id = current_setting('vessel.id', false)
AND id = previous_stays_id;
-- Clean up, remove invalid logbook and stay entry
DELETE FROM api.logbook WHERE id = logbook_rec.id;
RAISE WARNING '-> process_pre_logbook_fn delete invalid logbook [%]', logbook_rec.id;
DELETE FROM api.stays WHERE id = current_stays_id;
RAISE WARNING '-> process_pre_logbook_fn delete invalid stays [%]', current_stays_id;
RETURN;
END IF;
--IF (logbook_rec.notes IS NULL) THEN -- run one time only
-- -- If duration is over 24h or the number of entries is over 400, check for stays and potential multiple logs with stationary locations
-- IF (logbook_rec._to_time::TIMESTAMPTZ - logbook_rec._from_time::TIMESTAMPTZ) > INTERVAL '24 hours'
-- OR avg_rec.count_metric > 400 THEN
-- timebucket := public.logbook_metrics_timebucket_fn('15 minutes'::TEXT, logbook_rec.id, logbook_rec._from_time::TIMESTAMPTZ, logbook_rec._to_time::TIMESTAMPTZ);
-- -- If true, exit the current process as the current logbook needs to be re-processed.
-- IF timebucket IS True THEN
-- RETURN;
-- END IF;
-- ELSE
-- timebucket := public.logbook_metrics_timebucket_fn('5 minutes'::TEXT, logbook_rec.id, logbook_rec._from_time::TIMESTAMPTZ, logbook_rec._to_time::TIMESTAMPTZ);
-- -- If true, exit the current process as the current logbook needs to be re-processed.
-- IF timebucket IS True THEN
-- RETURN;
-- END IF;
-- END IF;
--END IF;
-- Add logbook entry to process queue for later processing
INSERT INTO process_queue (channel, payload, stored, ref_id)
VALUES ('new_logbook', logbook_rec.id, NOW(), current_setting('vessel.id', true));
END;
$function$
;
COMMENT ON FUNCTION public.process_pre_logbook_fn(int4) IS 'Detect/Avoid/ignore/delete logbook stationary movement or time sync issue';
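-- Illustrative usage sketch (not part of the migration): the function is normally
-- invoked via the process_queue worker, but can be run manually for a given logbook
-- id (hypothetical id 42) once the vessel context is set:
--   SELECT set_config('vessel.id', '<vessel_id>', false);
--   SELECT public.process_pre_logbook_fn(42);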
-- Revoke security definer
--ALTER FUNCTION api.update_logbook_observations_fn(_id integer, observations text) SECURITY INVOKER;
--ALTER FUNCTION api.delete_logbook_fn(_id integer) SECURITY INVOKER;
ALTER FUNCTION api.merge_logbook_fn(_id integer, _id integer) SECURITY INVOKER;
GRANT DELETE ON TABLE public.process_queue TO user_role;
GRANT SELECT ON ALL TABLES IN SCHEMA api TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user_role;
GRANT UPDATE (status) ON api.metrics TO user_role;
GRANT UPDATE ON api.logbook TO user_role;
DROP POLICY IF EXISTS api_user_role ON api.metrics;
CREATE POLICY api_user_role ON api.metrics TO user_role
USING (vessel_id = current_setting('vessel.id', false))
WITH CHECK (vessel_id = current_setting('vessel.id', false));
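-- Illustrative sketch (not part of the migration): with the api_user_role policy
-- above, user_role can only read or update metrics rows whose vessel_id matches
-- the session's vessel.id setting, e.g.:
--   SET ROLE user_role;
--   SELECT set_config('vessel.id', '<vessel_id>', false);
--   UPDATE api.metrics SET status = 'moored' WHERE time >= NOW() - interval '1 hour'; -- affects only this vessel's rows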
-- Update version
UPDATE public.app_settings
SET value='0.9.3'
WHERE "name"='app.version';
--\c postgres
--UPDATE cron.job SET username = 'scheduler'; -- Update to scheduler
--UPDATE cron.job SET username = current_user WHERE jobname = 'cron_vacuum'; -- Update to superuser for vacuum permissions
--UPDATE cron.job SET username = current_user WHERE jobname = 'job_run_details_cleanup';

View File

@@ -17,6 +17,10 @@ INSERT INTO app_settings (name, value) VALUES
('app.pushover_app_token', '${PGSAIL_PUSHOVER_APP_TOKEN}'),
('app.pushover_app_url', '${PGSAIL_PUSHOVER_APP_URL}'),
('app.telegram_bot_token', '${PGSAIL_TELEGRAM_BOT_TOKEN}'),
('app.grafana_admin_uri', '${PGSAIL_GRAFANA_ADMIN_URI}'),
('app.keycloak_uri', '${PGSAIL_KEYCLOAK_URI}'),
('app.gis_url', '${PGSAIL_QGIS_URL}'),
('app.videos_url', '${PGSAIL_VIDEOS_URL}'),
('app.url', '${PGSAIL_APP_URL}'),
('app.version', '${PGSAIL_VERSION}');
-- Update comment with version
@@ -25,4 +29,8 @@ COMMENT ON DATABASE signalk IS 'PostgSail version ${PGSAIL_VERSION}';
ALTER ROLE authenticator WITH PASSWORD '${PGSAIL_AUTHENTICATOR_PASSWORD}';
ALTER ROLE grafana WITH PASSWORD '${PGSAIL_GRAFANA_PASSWORD}';
ALTER ROLE grafana_auth WITH PASSWORD '${PGSAIL_GRAFANA_AUTH_PASSWORD}';
ALTER ROLE qgis_role WITH PASSWORD '${PGSAIL_GRAFANA_AUTH_PASSWORD}';
ALTER ROLE maplapse_role WITH PASSWORD '${PGSAIL_GRAFANA_AUTH_PASSWORD}';
END
curl -s -XPOST -Hx-pgsail:${PGSAIL_VERSION} https://api.openplotter.cloud/rpc/telemetry_fn

View File

@@ -1 +1 @@
0.5.1
0.9.3

File diff suppressed because one or more lines are too long

View File

@@ -13,6 +13,6 @@ $ bash tests.sh
## docker
```bash
$ docker-compose up -d db && sleep 15 && docker-compose up -d api && sleep 5
$ docker-compose -f docker-compose.dev.yml -f docker-compose.yml up tests
$ docker compose up -d db && sleep 15 && docker compose up -d api && sleep 5
$ docker compose -f docker-compose.dev.yml -f docker-compose.yml up tests
```

View File

@@ -27,6 +27,7 @@ const metrics_aava = require('./metrics_sample_aava.json');
const fs = require('fs');
let configtime = new Date().toISOString();
// CNAMEs Array
[
@@ -34,12 +35,12 @@ const fs = require('fs');
{ cname: process.env.PGSAIL_API_URI, name: "PostgSail unit test kapla",
signin: { email: 'demo+kapla@openplotter.cloud', pass: 'test', firstname:'First_kapla', lastname:'Last_kapla'},
login: { email: 'demo+kapla@openplotter.cloud', pass: 'test'},
vessel: { vessel_email: "demo+kapla@openplotter.cloud", vessel_mmsi: "test", vessel_name: "kapla"},
vessel: { vessel_email: "demo+kapla@openplotter.cloud", vessel_mmsi: "test", vessel_name: " kapla "},
preferences: { key: '{email_notifications}', value: false }, /* Disable email_notifications */
vessel_metadata: {
name: "kapla",
mmsi: "123456789",
client_id: "vessels.urn:mrn:signalk:uuid:5b4f7543-7153-4840-b139-761310b242fd",
//client_id: "vessels.urn:mrn:signalk:uuid:5b4f7543-7153-4840-b139-761310b242fd",
length: "12",
beam: "10",
height: "24",
@@ -59,8 +60,8 @@ const fs = require('fs');
user_views: [
// not processed yet, { url: '/stays_view', res_body_length: 1},
// not processed yet, { url: '/moorages_view', res_body_length: 1},
{ url: '/logs_view', res_body_length: 0},
{ url: '/log_view', res_body_length: 2},
{ url: '/logs_view', res_body_length: 0}, // not processed yet so empty
{ url: '/log_view', res_body_length: 0}, // not processed yet so empty
//{ url: '/stats_view', res_body_length: 1},
{ url: '/vessels_view', res_body_length: 1},
],
@@ -89,7 +90,7 @@ const fs = require('fs');
*/
],
user_fn: [
{ url: '/rpc/timelapse_fn',
{ url: '/rpc/export_logbooks_geojson_point_trips_fn',
payload: {
start_log: 1
},
@@ -97,7 +98,7 @@ const fs = require('fs');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_geojson_fn',
{ url: '/rpc/export_logbook_geojson_trip_fn',
payload: {
_id: 1
},
@@ -105,7 +106,15 @@ const fs = require('fs');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_gpx_fn',
{ url: '/rpc/export_logbook_gpx_trip_fn',
payload: {
_id: 1
},
res: {
obj_name: null
}
},
{ url: '/rpc/export_logbook_kml_trip_fn',
payload: {
_id: 1
},
@@ -169,17 +178,29 @@ const fs = require('fs');
obj_name: 'settings'
}
}
],
config_fn: [
{ url: '/metadata?select=configuration',
res: {
obj_name: 'configuration'
}
},
{ url: `/metadata?select=configuration&configuration->>update_at=gt.${configtime}`,
res: {
obj_name: 'settings'
}
},
]
},
{ cname: process.env.PGSAIL_API_URI, name: "PostgSail unit test, aava",
signin: { email: 'demo+aava@openplotter.cloud', pass: 'test', firstname:'first_aava', lastname:'last_aava'},
login: { email: 'demo+aava@openplotter.cloud', pass: 'test'},
vessel: { vessel_email: "demo+aava@openplotter.cloud", vessel_mmsi: "787654321", vessel_name: "aava"},
vessel: { vessel_email: "demo+aava@openplotter.cloud", vessel_mmsi: "787654321", vessel_name: " aava "},
preferences: { key: '{email_notifications}', value: false }, /* Disable email_notifications */
vessel_metadata: {
name: "aava",
mmsi: "787654321",
client_id: "vessels.urn:mrn:imo:mmsi:787654321",
mmsi: "n/a",
//client_id: "vessels.urn:mrn:imo:mmsi:787654321",
length: "12",
beam: "10",
height: "24",
@@ -198,8 +219,8 @@ const fs = require('fs');
user_views: [
// not processed yet, { url: '/stays_view', res_body_length: 1},
// not processed yet, { url: '/moorages_view', res_body_length: 1},
{ url: '/logs_view', res_body_length: 0},
{ url: '/log_view', res_body_length: 1},
{ url: '/logs_view', res_body_length: 0}, // not processed yet so empty
{ url: '/log_view', res_body_length: 0}, // not processed yet so empty
//{ url: '/stats_view', res_body_length: 1},
{ url: '/vessels_view', res_body_length: 1},
],
@@ -228,15 +249,15 @@ const fs = require('fs');
*/
],
user_fn: [
{ url: '/rpc/timelapse_fn',
{ url: '/rpc/export_logbooks_geojson_point_trips_fn',
payload: {
start_log: 3
start_log: 1
},
res: {
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_geojson_fn',
{ url: '/rpc/export_logbook_geojson_trip_fn',
payload: {
_id: 3
},
@@ -244,7 +265,15 @@ const fs = require('fs');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_gpx_fn',
{ url: '/rpc/export_logbook_gpx_trip_fn',
payload: {
_id: 3
},
res: {
obj_name: null
}
},
{ url: '/rpc/export_logbook_kml_trip_fn',
payload: {
_id: 3
},
@@ -302,6 +331,30 @@ const fs = require('fs');
obj_name: 'settings'
}
},
],
config_fn: [
{ url: '/metadata?select=configuration',
res: {
obj_name: 'configuration'
}
},
{ url: `/metadata?select=configuration&configuration->>update_at=gt.${configtime}`,
res: {
obj_name: 'settings'
}
},
],
meta_ext_fn: [
{ url: '/metadata_ext?',
res: {
obj_name: 'configuration'
}
},
{ url: `/metadata_ext?`,
res: {
obj_name: 'image'
}
},
]
}
].forEach( function(test){
@@ -580,7 +633,7 @@ request.set('User-Agent', 'PostgSail unit tests');
.set('Authorization', `Bearer ${vessel_jwt}`)
.set('Accept', 'application/json')
.set('Content-Type', 'application/json')
.set('Prefer', 'return=headers-only,resolution=merge-duplicates')
.set('Prefer', 'missing=default,return=headers-only,resolution=merge-duplicates')
.end(function(err,res){
res.status.should.equal(201);
//console.log(res.header);
@@ -595,18 +648,28 @@ request.set('User-Agent', 'PostgSail unit tests');
describe("Vessel POST metrics, JWT vessel_role", function(){
let data = [];
//console.log(vessel_metrics['metrics'][0]);
//console.log(test.vessel_metrics['metrics'][0]);
let i;
for (i = 0; i < test.vessel_metrics['metrics'].length; i++) {
data[i] = test.vessel_metrics['metrics'][i];
// Override time, -2h to allow to new data later without delay.
data[i]['time'] = moment.utc().subtract(2, 'hours').add(i, 'minutes').format();
data[i]['time'] = moment.utc().subtract(1, 'day').add(i, 'minutes').format();
// Override client_id
data[i]['client_id'] = test.vessel_metadata.client_id;
//data[i]['client_id'] = test.vessel_metadata.client_id;
data[i]['client_id'] = null;
}
// Force last entry to be back in time from previous, it should be ignore silently
data.at(-1).time = moment.utc(data.at(-2).time).subtract(1, 'minutes').format();
// The last entries are invalid and should be ignored:
// - Invalid status
// - Invalid speedoverground
// - Invalid time: the previous time is duplicated
// Force the last valid entry back in time from the previous one; it should be ignored silently
data.at(-1).time = moment.utc(data.at(-3).time).subtract(1, 'minutes').format();
//console.log(data[0]);
// Force the -2 entry into the future by adding 1 year; it should be ignored silently
data.splice(i-2, 1, data.at(-2))
data.at(-3).time = moment.utc(data.at(-3).time).add(1, 'year').format();
//console.log(data.at(-2));
//console.log(data.at(-1));
it('/metrics?select=time', function(done) {
request = supertest.agent(test.cname);
@@ -625,7 +688,8 @@ request.set('User-Agent', 'PostgSail unit tests');
res.header['content-type'].should.match(new RegExp('json','g'));
res.header['server'].should.match(new RegExp('postgrest','g'));
should.exist(res.body);
res.body.length.should.match(test.vessel_metrics['metrics'].length-1);
//console.log(res.body);
res.body.length.should.match(test.vessel_metrics['metrics'].length-3);
done(err);
});
});
@@ -817,6 +881,37 @@ request.set('User-Agent', 'PostgSail unit tests');
}); // Function OTP endpoint
*/
describe("Function Metadata configuration endpoint, JWT vessel_role", function(){
let otp = null;
test.config_fn.forEach(function (subtest) {
it(`${subtest.url}`, function(done) {
try {
//console.log(`${subtest.url} ${subtest.res}`);
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.get(subtest.url)
.set('Authorization', `Bearer ${vessel_jwt}`)
.set('Accept', 'application/json')
.end(function(err,res){
res.status.should.equal(200);
should.exist(res.header['content-type']);
should.exist(res.header['server']);
res.header['content-type'].should.match(new RegExp('json','g'));
res.header['server'].should.match(new RegExp('postgrest','g'));
console.log(res.body);
should.exist(res.body);
done(err);
});
}
catch (error) {
done();
}
});
});
}); // Function metadata configuration endpoint
}); // OpenAPI description
}); // CNAMEs Array

View File

@@ -6,7 +6,7 @@
*
* npm install supertest should mocha mochawesome moment
* alias mocha="./node_modules/mocha/bin/_mocha"
* mocha index.js --reporter mochawesome --reporter-options reportDir=/mnt/postgsail/,reportFilename=report_api.html
* mocha index2.js --reporter mochawesome --reporter-options reportDir=/mnt/postgsail/,reportFilename=report_api.html
*
*/
@@ -28,14 +28,15 @@ const metrics_simulator = require('./metrics_sample_simulator.json');
vessel_metadata: {
name: "aava",
mmsi: "787654321",
client_id: "vessels.urn:mrn:imo:mmsi:787654321",
//client_id: "vessels.urn:mrn:imo:mmsi:787654321",
length: "12",
beam: "10",
height: "24",
ship_type: "37",
plugin_version: "1.0.2",
signalk_version: "1.20.0",
time: moment().subtract(69, 'minutes').format()
time: moment().subtract(69, 'minutes').format(),
available_keys: [],
},
vessel_metrics: metrics_simulator,
user_tables: [
@@ -48,7 +49,7 @@ const metrics_simulator = require('./metrics_sample_simulator.json');
// not processed yet, { url: '/stays_view', res_body_length: 1},
// not processed yet, { url: '/moorages_view', res_body_length: 1},
{ url: '/logs_view', res_body_length: 1},
{ url: '/log_view', res_body_length: 2},
{ url: '/log_view', res_body_length: 0}, // not processed yet so empty
//{ url: '/stats_view', res_body_length: 1},
{ url: '/vessels_view', res_body_length: 1},
],
@@ -77,7 +78,7 @@ const metrics_simulator = require('./metrics_sample_simulator.json');
*/
],
user_fn: [
{ url: '/rpc/timelapse_fn',
{ url: '/rpc/export_logbooks_geojson_point_trips_fn',
payload: {
start_log: 4
},
@@ -85,7 +86,7 @@ const metrics_simulator = require('./metrics_sample_simulator.json');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_geojson_fn',
{ url: '/rpc/export_logbook_geojson_trip_fn',
payload: {
_id: 4
},
@@ -93,7 +94,15 @@ const metrics_simulator = require('./metrics_sample_simulator.json');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_gpx_fn',
{ url: '/rpc/export_logbook_gpx_trip_fn',
payload: {
_id: 4
},
res: {
obj_name: null
}
},
{ url: '/rpc/export_logbook_kml_trip_fn',
payload: {
_id: 4
},
@@ -404,18 +413,19 @@ request.set('User-Agent', 'PostgSail unit tests');
describe("Vessel POST metadata, JWT vessel_role", function(){
it('/metadata', function(done) {
it('/metadata?on_conflict=vessel_id', function(done) {
request = supertest.agent(test.cname);
request
.post('/metadata')
.post('/metadata?on_conflict=vessel_id')
.send(test.vessel_metadata)
.set('Authorization', `Bearer ${vessel_jwt}`)
.set('Accept', 'application/json')
.set('Content-Type', 'application/json')
.set('Prefer', 'return=headers-only')
.set('Prefer', 'missing=default,return=headers-only,resolution=merge-duplicates')
.end(function(err,res){
res.status.should.equal(201);
//console.log(res.body);
//console.log(res.header);
res.status.should.equal(200);
should.exist(res.header['server']);
res.header['server'].should.match(new RegExp('postgrest','g'));
done(err);
@@ -432,11 +442,12 @@ request.set('User-Agent', 'PostgSail unit tests');
for (i = 0; i < test.vessel_metrics['metrics'].length; i++) {
data[i] = test.vessel_metrics['metrics'][i];
// Override time, +1h because the previous sample includes 47 entries.
data[i]['time'] = moment().add(1, 'hour').add(i, 'minutes').format();
data[i]['time'] = moment.utc().subtract(2, 'hours').add(i, 'minutes').format();
// Override client_id
data[i]['client_id'] = test.vessel_metadata.client_id;
//data[i]['client_id'] = test.vessel_metadata.client_id;
data[i]['client_id'] = null;
}
console.log(data[0]);
//console.log(data[0]);
it('/metrics?select=time', function(done) {
request = supertest.agent(test.cname);

View File

@@ -6,7 +6,7 @@
*
* npm install supertest should mocha mochawesome moment
* alias mocha="./node_modules/mocha/bin/_mocha"
* mocha index.js --reporter mochawesome --reporter-options reportDir=/mnt/postgsail/,reportFilename=report_api.html
* mocha index3.js --reporter mochawesome --reporter-options reportDir=/mnt/postgsail/,reportFilename=report_api.html
*
*/
@@ -31,7 +31,7 @@ var moment = require('moment');
vessel_metadata: {
name: "kapla",
mmsi: "123456789",
client_id: "vessels.urn:mrn:imo:mmsi:123456789",
//client_id: "vessels.urn:mrn:imo:mmsi:123456789",
length: "12",
beam: "10",
height: "24",
@@ -49,7 +49,7 @@ var moment = require('moment');
],
user_views: [
{ url: '/stays_view', res_body_length: 2},
{ url: '/moorages_view', res_body_length: 2},
{ url: '/moorages_view', res_body_length: 3},
{ url: '/logs_view', res_body_length: 2},
{ url: '/log_view', res_body_length: 2},
//{ url: '/stats_view', res_body_length: 1},
@@ -79,7 +79,8 @@ var moment = require('moment');
}
],
user_fn: [
{ url: '/rpc/timelapse_fn',
{ //url: '/rpc/timelapse_fn',
url: '/rpc/export_logbooks_geojson_linestring_trips_fn',
payload: {
start_log: 2
},
@@ -87,7 +88,17 @@ var moment = require('moment');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_geojson_fn',
{ //url: '/rpc/timelapse_fn',
url: '/rpc/export_logbooks_geojson_point_trips_fn',
payload: {
start_log: 2
},
res: {
obj_name: 'geojson'
}
},
{ //url: '/rpc/export_logbook_geojson_fn',
url: '/rpc/export_logbook_geojson_trip_fn',
payload: {
_id: 2
},
@@ -95,7 +106,8 @@ var moment = require('moment');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_gpx_fn',
{ //url: '/rpc/export_logbook_gpx_fn',
url: '/rpc/export_logbook_kml_trip_fn',
payload: {
_id: 2
},
@@ -103,7 +115,8 @@ var moment = require('moment');
obj_name: null
}
},
{ url: '/rpc/export_logbook_kml_fn',
{ //url: '/rpc/export_logbook_kml_fn',
url: '/rpc/export_logbook_kml_trip_fn',
payload: {
_id: 2
},
@@ -123,6 +136,12 @@ var moment = require('moment');
obj_name: null
}
},
{ url: '/rpc/export_moorages_kml_fn',
payload: {},
res: {
obj_name: null
}
},
{ url: '/rpc/find_log_from_moorage_fn',
payload: {
_id: 2
@@ -230,7 +249,7 @@ var moment = require('moment');
vessel_metadata: {
name: "aava",
mmsi: "787654321",
client_id: "vessels.urn:mrn:imo:mmsi:787654321",
//client_id: "vessels.urn:mrn:imo:mmsi:787654321",
length: "12",
beam: "10",
height: "24",
@@ -247,7 +266,7 @@ var moment = require('moment');
],
user_views: [
{ url: '/stays_view', res_body_length: 2},
{ url: '/moorages_view', res_body_length: 2},
{ url: '/moorages_view', res_body_length: 4},
{ url: '/logs_view', res_body_length: 2},
{ url: '/log_view', res_body_length: 2},
//{ url: '/stats_view', res_body_length: 1},
@@ -277,7 +296,8 @@ var moment = require('moment');
}
],
user_fn: [
{ url: '/rpc/timelapse_fn',
{ //url: '/rpc/timelapse_fn',
url: '/rpc/export_logbooks_geojson_linestring_trips_fn',
payload: {
start_log: 4
},
@@ -285,7 +305,17 @@ var moment = require('moment');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_geojson_fn',
{ //url: '/rpc/timelapse_fn',
url: '/rpc/export_logbooks_geojson_point_trips_fn',
payload: {
start_log: 4
},
res: {
obj_name: 'geojson'
}
},
{ //url: '/rpc/export_logbook_geojson_fn',
url: '/rpc/export_logbook_geojson_trip_fn',
payload: {
_id: 4
},
@@ -293,7 +323,8 @@ var moment = require('moment');
obj_name: 'geojson'
}
},
{ url: '/rpc/export_logbook_gpx_fn',
{ //url: '/rpc/export_logbook_gpx_fn',
url: '/rpc/export_logbook_gpx_trip_fn',
payload: {
_id: 4
},
@@ -301,7 +332,8 @@ var moment = require('moment');
obj_name: null
}
},
{ url: '/rpc/export_logbook_kml_fn',
{ //url: '/rpc/export_logbook_kml_fn',
url: '/rpc/export_logbook_kml_trip_fn',
payload: {
_id: 4
},
@@ -309,7 +341,8 @@ var moment = require('moment');
obj_name: null
}
},
{ url: '/rpc/export_logbooks_gpx_fn',
{ //url: '/rpc/export_logbooks_gpx_fn',
url: '/rpc/export_logbooks_kml_trips_fn',
payload: {
start_log: 3,
end_log: 4
@@ -318,7 +351,8 @@ var moment = require('moment');
obj_name: null
}
},
{ url: '/rpc/export_logbooks_kml_fn',
{ //url: '/rpc/export_logbooks_kml_fn',
url: '/rpc/export_logbooks_kml_trips_fn',
payload: {
start_log: 3,
end_log: 4
@@ -339,6 +373,12 @@ var moment = require('moment');
obj_name: null
}
},
{ url: '/rpc/export_moorages_kml_fn',
payload: {},
res: {
obj_name: null
}
},
{ url: '/rpc/find_log_from_moorage_fn',
payload: {
_id: 4

View File

@@ -163,6 +163,10 @@ var moment = require("moment");
url: "/rpc/update_user_preferences_fn",
payload: { key: "{public_monitoring}", value: true },
},
{
url: "/rpc/update_user_preferences_fn",
payload: { key: "{public_timelapse}", value: true },
},
],
},
{
@@ -347,7 +351,7 @@ var moment = require("moment");
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
should.exist(res.body.token);
res.body.token.should.match(user_jwt);
//res.body.token.should.match(user_jwt);
console.log(user_jwt);
should.exist(user_jwt);
done(err);
@@ -685,7 +689,7 @@ var moment = require("moment");
let event = res.body;
//console.log(event);
// minimum events log for kapla & aava 13 + 4 email_otp = 17
event.length.should.be.aboveOrEqual(13);
event.length.should.be.aboveOrEqual(11);
done(err);
});
});

View File

@@ -45,13 +45,40 @@ var moment = require("moment");
res: {},
},
timelapse: {
url: "/rpc/timelapse_fn",
//url: "/rpc/timelapse_fn",
url: '/rpc/export_logbooks_geojson_linestring_trips_fn',
header: { name: "x-is-public", value: btoa("kapla,public_timelapse,1") },
payload: null,
res: {},
},
timelapse_full: {
//url: "/rpc/timelapse_fn",
url: '/rpc/export_logbooks_geojson_linestring_trips_fn',
header: { name: "x-is-public", value: btoa("kapla,public_timelapse,0") },
payload: null,
res: {},
},
replay_full: {
//url: "/rpc/timelapse_fn",
url: '/rpc/export_logbooks_geojson_point_trips_fn',
header: { name: "x-is-public", value: btoa("kapla,public_timelapse,0") },
payload: null,
res: {},
},
stats_logs: {
url: "/rpc/stats_logs_fn",
header: { name: "x-is-public", value: btoa("kapla,public_stats,0") },
payload: null,
res: {},
},
stats_stays: {
url: "/rpc/stats_stay_fn",
header: { name: "x-is-public", value: btoa("kapla,public_stats,0") },
payload: null,
res: {},
},
export_gpx: {
url: "/rpc/export_logbook_gpx_fn",
url: "/rpc/export_logbook_gpx_trip_fn",
header: { name: "x-is-public", value: btoa("kapla,public_logs,0") },
payload: null,
res: {},
@@ -79,13 +106,39 @@ var moment = require("moment");
res: {},
},
timelapse: {
url: "/rpc/timelapse_fn",
//url: "/rpc/timelapse_fn",
url: '/rpc/export_logbooks_geojson_linestring_trips_fn',
header: { name: "x-is-public", value: btoa("aava,public_timelapse,3") },
payload: null,
res: {},
},
timelapse_full: {
//url: "/rpc/timelapse_fn",
url: '/rpc/export_logbooks_geojson_linestring_trips_fn',
header: { name: "x-is-public", value: btoa("aava,public_timelapse,0") },
payload: null,
res: {},
},
replay_full: {
url: '/rpc/export_logbooks_geojson_point_trips_fn',
header: { name: "x-is-public", value: btoa("aava,public_timelapse,0") },
payload: null,
res: {},
},
stats_logs: {
url: "/rpc/stats_logs_fn",
header: { name: "x-is-public", value: btoa("aava,public_stats,0") },
payload: null,
res: {},
},
stats_stays: {
url: "/rpc/stats_stay_fn",
header: { name: "x-is-public", value: btoa("kapla,public_stats,0") },
payload: null,
res: {},
},
export_gpx: {
url: "/rpc/export_logbook_gpx_fn",
url: "/rpc/export_logbook_gpx_trip_fn",
header: { name: "x-is-public", value: btoa("aava,public_logs,0") },
payload: null,
res: {},
@@ -97,8 +150,8 @@ var moment = require("moment");
request = supertest.agent(test.cname);
request.set("User-Agent", "PostgSail unit tests");
describe("Get JWT api_anonymous", function () {
it("/logs_view, api_anonymous no jwt token", function (done) {
describe("With no JWT as api_anonymous", function () {
it("/logs_view, api_anonymous no jwt token, x-is-public header", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
@@ -106,7 +159,7 @@ var moment = require("moment");
.set(test.logs.header.name, test.logs.header.value)
.set("Accept", "application/json")
.end(function (err, res) {
res.status.should.equal(404);
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
@@ -114,7 +167,7 @@ var moment = require("moment");
done(err);
});
});
it("/log_view, api_anonymous no jwt token", function (done) {
it("/log_view, api_anonymous no jwt token, x-is-public header", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
@@ -130,7 +183,7 @@ var moment = require("moment");
done(err);
});
});
it("/monitoring_view, api_anonymous no jwt token", function (done) {
it("/monitoring_view, api_anonymous no jwt token, x-is-public header", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
@@ -147,7 +200,7 @@ var moment = require("moment");
done(err);
});
});
it("/rpc/timelapse_fn, api_anonymous no jwt token", function (done) {
it("/rpc/export_logbooks_geojson_linestring_trips_fn, api_anonymous no jwt token, x-is-public header", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
@@ -156,11 +209,44 @@ var moment = require("moment");
.set("Accept", "application/json")
.end(function (err, res) {
console.log(res.text);
res.status.should.equal(404);
res.status.should.equal(200); // would return 404 if not enabled in user settings.
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
should.exist(res.body.geojson);
/*
if (res.body.geojson.features == null) { // aava
//res.body.geojson.features.should.not.be.ok();
done(err);
}
res.body.geojson.features.length.should.be.equal(4);
*/
done(err);
});
});
it("/rpc/export_logbooks_geojson_point_trips_fn, api_anonymous no jwt token, x-is-public header", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.post(test.replay_full.url)
.set(test.replay_full.header.name, test.replay_full.header.value)
.set("Accept", "application/json")
.end(function (err, res) {
console.log(res.text);
res.status.should.equal(200); // would return 404 if not enabled in user settings.
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
should.exist(res.body.geojson);
/*
if (res.body.geojson.features == null) { // aava
//res.body.geojson.features.should.not.be.ok();
done(err);
}
res.body.geojson.features.length.should.be.equal(53);
*/
done(err);
});
});

226
tests/index6.js Normal file
View File

@@ -0,0 +1,226 @@
"use strict";
/*
* Unit test #6
* Public/Anonymous access
*
* process.env.PGSAIL_API_URI = from inside the docker
*
* npm install supertest should mocha mochawesome moment
* alias mocha="./node_modules/mocha/bin/_mocha"
* mocha index6.js --reporter mochawesome --reporter-options reportDir=/mnt/postgsail/,reportFilename=report_api.html
*
*/
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));
const supertest = require("supertest");
// Deprecated
const should = require("should");
//const chai = require("chai");
//const should = chai.should();
let request = null;
var moment = require("moment");
// Users Array
[
{
cname: process.env.PGSAIL_API_URI,
name: "PostgSail unit test anonymous, no x-is-public header",
moorages: {
url: "/moorages_view",
payload: null,
res: {},
},
stays: {
url: "/stays_view",
payload: null,
res: {},
},
logs: {
url: "/logs_view",
payload: null,
res: {},
},
log: {
url: "/log_view?id=eq.1",
payload: null,
res: {},
},
monitoring: {
url: "/monitoring_view",
payload: null,
res: {},
},
timelapse: {
url: "/rpc/export_logbooks_geojson_linestring_trips_fn",
payload: null,
res: {},
},
timelapse_full: {
url: "/rpc/export_logbooks_geojson_linestring_trips_fn",
payload: null,
res: {},
},
replay_full: {
url: "/rpc/export_logbooks_geojson_point_trips_fn",
payload: null,
res: {},
},
stats_logs: {
url: "/rpc/stats_logs_fn",
payload: null,
res: {},
},
stats_stays: {
url: "/rpc/stats_stay_fn",
payload: null,
res: {},
},
export_gpx: {
url: "/rpc/export_logbook_gpx_trip_fn",
payload: null,
res: {},
},
},
].forEach(function (test) {
//console.log(`${test.cname}`);
describe(`${test.name}`, function () {
request = supertest.agent(test.cname);
request.set("User-Agent", "PostgSail unit tests");
describe("With no JWT as api_anonymous, no x-is-public", function () {
it("/stays_view, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.get(test.stays.url)
.set("Accept", "application/json")
.end(function (err, res) {
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
res.body.length.should.be.equal(0);
done(err);
});
});
it("/moorages_view, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.get(test.moorages.url)
.set("Accept", "application/json")
.end(function (err, res) {
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
res.body.length.should.be.equal(0);
done(err);
});
});
it("/logs_view, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.get(test.logs.url)
.set("Accept", "application/json")
.end(function (err, res) {
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
res.body.length.should.be.equal(0);
done(err);
});
});
it("/log_view, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.get(test.log.url)
.set("Accept", "application/json")
.end(function (err, res) {
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
res.body.length.should.be.equal(0);
done(err);
});
});
it("/monitoring_view, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.get(test.monitoring.url)
.set("Accept", "application/json")
.end(function (err, res) {
console.log(res.text);
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
res.body.length.should.be.equal(0);
done(err);
});
});
it("/rpc/export_logbooks_geojson_linestring_trips_fn, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.post(test.timelapse.url)
.set("Accept", "application/json")
.end(function (err, res) {
console.log(res.text);
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
should.exist(res.body.geojson);
done(err);
});
});
it("/rpc/export_logbooks_geojson_point_trips_fn, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.post(test.replay_full.url)
.set("Accept", "application/json")
.end(function (err, res) {
console.log(res.body);
res.status.should.equal(200);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
should.exist(res.body.geojson);
done(err);
});
});
it("/rpc/export_logbook_gpx_trip_fn, api_anonymous no jwt token", function (done) {
// Reset agent so we do not save cookies
request = supertest.agent(test.cname);
request
.post(test.export_gpx.url)
.send({_id: 1})
.set("Accept", "application/json")
.end(function (err, res) {
console.log(res.text)
res.status.should.equal(401);
should.exist(res.header["content-type"]);
should.exist(res.header["server"]);
res.header["content-type"].should.match(new RegExp("json", "g"));
res.header["server"].should.match(new RegExp("postgrest", "g"));
done(err);
});
});
}); // user JWT
}); // OpenAPI description
}); // Users Array

View File

@@ -180,6 +180,18 @@
"status" : "sailing",
"metrics" : {"navigation.log": 17441766, "navigation.trip.log": 80747, "navigation.headingTrue": 3.5972, "navigation.gnss.satellites": 10, "environment.depth.belowKeel": 20.948999999999998, "navigation.magneticVariation": 0.1414, "navigation.speedThroughWater": 3.47, "environment.water.temperature": 313.15, "electrical.batteries.1.current": 192.4, "electrical.batteries.1.voltage": 14.56, "navigation.gnss.antennaAltitude": 0.39, "network.n2k.ngt-1.130356.errorID": 0, "network.n2k.ngt-1.130356.modelID": 14, "environment.depth.belowTransducer": 20.95, "electrical.batteries.1.temperature": 299.82, "environment.depth.transducerToKeel": -0.001, "navigation.gnss.horizontalDilution": 0.8, "network.n2k.ngt-1.130356.ch1.rxLoad": 4, "network.n2k.ngt-1.130356.ch1.txLoad": 0, "network.n2k.ngt-1.130356.ch2.rxLoad": 0, "network.n2k.ngt-1.130356.ch2.txLoad": 64, "network.n2k.ngt-1.130356.ch1.deleted": 0, "network.n2k.ngt-1.130356.ch2.deleted": 0, "network.n2k.ngt-1.130356.ch2Bandwidth": 3, "network.n2k.ngt-1.130356.ch1.bandwidth": 2, "network.n2k.ngt-1.130356.ch1.rxDropped": 0, "network.n2k.ngt-1.130356.ch2.rxDropped": 0, "network.n2k.ngt-1.130356.ch1.rxFiltered": 0, "network.n2k.ngt-1.130356.ch2.rxFiltered": 0, "network.n2k.ngt-1.130356.ch1.rxBandwidth": 4, "network.n2k.ngt-1.130356.ch1.txBandwidth": 0, "network.n2k.ngt-1.130356.ch2.rxBandwidth": 0, "network.n2k.ngt-1.130356.ch2.txBandwidth": 10, "network.n2k.ngt-1.130356.uniChannelCount": 2, "network.n2k.ngt-1.130356.indiChannelCount": 2, "network.n2k.ngt-1.130356.ch1.BufferLoading": 0, "network.n2k.ngt-1.130356.ch2.bufferLoading": 0, "network.n2k.ngt-1.130356.ch1.PointerLoading": 0, "network.n2k.ngt-1.130356.ch2.pointerLoading": 0}
},
{
"time" : "2022-07-31T11:41:28.561Z",
"client_id" : "vessels.urn:mrn:imo:mmsi:987654321",
"latitude" : 59.7163052,
"longitude" : 25.7325741,
"speedoverground" : 9.5,
"courseovergroundtrue" : 198.8,
"windspeedapparent" : 18.0,
"anglespeedapparent" : 41.0,
"status" : "sailing",
"metrics" : {"navigation.log": 17441766, "navigation.trip.log": 80747, "navigation.headingTrue": 3.5972, "navigation.gnss.satellites": 10, "environment.depth.belowKeel": 20.948999999999998, "navigation.magneticVariation": 0.1414, "navigation.speedThroughWater": 3.47, "environment.water.temperature": 313.15, "electrical.batteries.1.current": 192.4, "electrical.batteries.1.voltage": 14.56, "navigation.gnss.antennaAltitude": 0.39, "network.n2k.ngt-1.130356.errorID": 0, "network.n2k.ngt-1.130356.modelID": 14, "environment.depth.belowTransducer": 20.95, "electrical.batteries.1.temperature": 299.82, "environment.depth.transducerToKeel": -0.001, "navigation.gnss.horizontalDilution": 0.8, "network.n2k.ngt-1.130356.ch1.rxLoad": 4, "network.n2k.ngt-1.130356.ch1.txLoad": 0, "network.n2k.ngt-1.130356.ch2.rxLoad": 0, "network.n2k.ngt-1.130356.ch2.txLoad": 64, "network.n2k.ngt-1.130356.ch1.deleted": 0, "network.n2k.ngt-1.130356.ch2.deleted": 0, "network.n2k.ngt-1.130356.ch2Bandwidth": 3, "network.n2k.ngt-1.130356.ch1.bandwidth": 2, "network.n2k.ngt-1.130356.ch1.rxDropped": 0, "network.n2k.ngt-1.130356.ch2.rxDropped": 0, "network.n2k.ngt-1.130356.ch1.rxFiltered": 0, "network.n2k.ngt-1.130356.ch2.rxFiltered": 0, "network.n2k.ngt-1.130356.ch1.rxBandwidth": 4, "network.n2k.ngt-1.130356.ch1.txBandwidth": 0, "network.n2k.ngt-1.130356.ch2.rxBandwidth": 0, "network.n2k.ngt-1.130356.ch2.txBandwidth": 10, "network.n2k.ngt-1.130356.uniChannelCount": 2, "network.n2k.ngt-1.130356.indiChannelCount": 2, "network.n2k.ngt-1.130356.ch1.BufferLoading": 0, "network.n2k.ngt-1.130356.ch2.bufferLoading": 0, "network.n2k.ngt-1.130356.ch1.PointerLoading": 0, "network.n2k.ngt-1.130356.ch2.pointerLoading": 0}
},
{
"time" : "2022-07-31T11:42:28.569Z",
"client_id" : "vessels.urn:mrn:imo:mmsi:987654321",
@@ -573,7 +585,31 @@
"courseovergroundtrue" : 197.6,
"windspeedapparent" : 15.9,
"anglespeedapparent" : 31.0,
"status" : "ais-sart",
"metrics" : {}
},
{
"time" : "2022-07-31T12:14:29.168Z",
"client_id" : "vessels.urn:mrn:imo:mmsi:987654321",
"latitude" : 59.7124174,
"longitude" : 24.7289112,
"speedoverground" : 55.7,
"courseovergroundtrue" : 197.6,
"windspeedapparent" : 15.9,
"anglespeedapparent" : 31.0,
"status" : "anchored",
"metrics" : {"navigation.log": 17442506, "navigation.trip.log": 81321, "navigation.headingTrue": 3.571, "navigation.gnss.satellites": 10, "environment.depth.belowKeel": 13.749, "navigation.magneticVariation": 0.1414, "navigation.speedThroughWater": 3.07, "environment.water.temperature": 313.15, "electrical.batteries.1.current": 43.9, "electrical.batteries.1.voltage": 14.54, "navigation.gnss.antennaAltitude": 2.05, "network.n2k.ngt-1.130356.errorID": 0, "network.n2k.ngt-1.130356.modelID": 14, "environment.depth.belowTransducer": 13.75, "electrical.batteries.1.temperature": 299.82, "environment.depth.transducerToKeel": -0.001, "navigation.gnss.horizontalDilution": 0.8, "network.n2k.ngt-1.130356.ch1.rxLoad": 4, "network.n2k.ngt-1.130356.ch1.txLoad": 0, "network.n2k.ngt-1.130356.ch2.rxLoad": 0, "network.n2k.ngt-1.130356.ch2.txLoad": 40, "network.n2k.ngt-1.130356.ch1.deleted": 0, "network.n2k.ngt-1.130356.ch2.deleted": 0, "network.n2k.ngt-1.130356.ch2Bandwidth": 4, "network.n2k.ngt-1.130356.ch1.bandwidth": 3, "network.n2k.ngt-1.130356.ch1.rxDropped": 0, "network.n2k.ngt-1.130356.ch2.rxDropped": 0, "network.n2k.ngt-1.130356.ch1.rxFiltered": 0, "network.n2k.ngt-1.130356.ch2.rxFiltered": 0, "network.n2k.ngt-1.130356.ch1.rxBandwidth": 5, "network.n2k.ngt-1.130356.ch1.txBandwidth": 0, "network.n2k.ngt-1.130356.ch2.rxBandwidth": 0, "network.n2k.ngt-1.130356.ch2.txBandwidth": 10, "network.n2k.ngt-1.130356.uniChannelCount": 2, "network.n2k.ngt-1.130356.indiChannelCount": 2, "network.n2k.ngt-1.130356.ch1.BufferLoading": 0, "network.n2k.ngt-1.130356.ch2.bufferLoading": 0, "network.n2k.ngt-1.130356.ch1.PointerLoading": 0, "network.n2k.ngt-1.130356.ch2.pointerLoading": 0}
"metrics" : {}
},
{
"time" : "2022-07-31T12:14:29.168Z",
"client_id" : "vessels.urn:mrn:imo:mmsi:987654321",
"latitude" : 59.7124174,
"longitude" : 24.7289112,
"speedoverground" : 5.7,
"courseovergroundtrue" : 197.6,
"windspeedapparent" : 15.9,
"anglespeedapparent" : 31.0,
"status" : "anchored",
"metrics" : {}
}
]}

View File

@@ -633,7 +633,31 @@
"courseovergroundtrue" : 122.0,
"windspeedapparent" : 7.2,
"anglespeedapparent" : 10.0,
"status" : "unknown",
"metrics" : {}
},
{
"time" : "2022-07-30T15:41:28.867Z",
"client_id" : "vessels.urn:mrn:imo:mmsi:123456789",
"latitude" : 59.86,
"longitude" : 23.365766666666666,
"speedoverground" : 60.0,
"courseovergroundtrue" : 122.0,
"windspeedapparent" : 7.2,
"anglespeedapparent" : 10.0,
"status" : "anchored",
"metrics" : {"environment.wind.speedTrue": 0.63, "navigation.speedThroughWater": 3.2255674838104293, "performance.velocityMadeGood": -2.242978345998959, "environment.wind.angleTrueWater": 2.3038346131585485, "environment.depth.belowTransducer": 17.73, "navigation.courseOverGroundMagnetic": 2.129127154994025, "navigation.courseRhumbline.crossTrackError": 0}
"metrics" : {}
},
{
"time" : "2022-07-30T15:41:28.867Z",
"client_id" : "vessels.urn:mrn:imo:mmsi:123456789",
"latitude" : 59.86,
"longitude" : 23.365766666666666,
"speedoverground" : 0.0,
"courseovergroundtrue" : 122.0,
"windspeedapparent" : 7.2,
"anglespeedapparent" : 10.0,
"status" : "anchored",
"metrics" : {}
}
]}

View File

@@ -8,5 +8,8 @@
"moment": "^2.29.4",
"should": "^13.2.3",
"supertest": "^6.3.3"
},
"devDependencies": {
"schemalint": "^2.0.5"
}
}

View File

@@ -18,3 +18,10 @@ SELECT api.ispublic_fn('kapla', 'public_logs', 1);
SELECT api.ispublic_fn('kapla', 'public_logs', 3);
SELECT api.ispublic_fn('kapla', 'public_monitoring');
SELECT api.ispublic_fn('kapla', 'public_timelapse');
SELECT api.ispublic_fn('aava', 'public_test');
SELECT api.ispublic_fn('aava', 'public_logs_list');
SELECT api.ispublic_fn('aava', 'public_logs', 1);
SELECT api.ispublic_fn('aava', 'public_logs', 3);
SELECT api.ispublic_fn('aava', 'public_monitoring');
SELECT api.ispublic_fn('aava', 'public_timelapse');
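-- A hedged sketch (assumption): the 3-argument form presumably checks whether
-- a single log id is public for the anonymous web views, e.g.:
--SELECT api.ispublic_fn('kapla', 'public_logs', 2);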

View File

@@ -15,6 +15,27 @@ ispublic_fn | f
-[ RECORD 1 ]--
ispublic_fn | t
-[ RECORD 1 ]--
ispublic_fn | f
-[ RECORD 1 ]--
ispublic_fn | t
-[ RECORD 1 ]--
ispublic_fn | t
-[ RECORD 1 ]--
ispublic_fn | f
-[ RECORD 1 ]--
ispublic_fn | f
-[ RECORD 1 ]--
ispublic_fn | f
-[ RECORD 1 ]--
ispublic_fn | t
-[ RECORD 1 ]--
ispublic_fn | t

View File

@@ -17,26 +17,30 @@ SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
\echo 'Insert new api.logbook for badges'
INSERT INTO api.logbook
(id, active, "name", "_from", "_from_lat", "_from_lng", "_to", "_to_lat", "_to_lng", track_geom, track_geog, track_geojson, "_from_time", "_to_time", distance, duration, avg_speed, max_speed, max_wind_speed, notes, vessel_id)
(id, active, "name", "_from", "_from_lat", "_from_lng", "_to", "_to_lat", "_to_lng", trip, "_from_time", "_to_time", distance, duration, avg_speed, max_speed, max_wind_speed, notes, vessel_id)
OVERRIDING SYSTEM VALUE VALUES
(nextval('api.logbook_id_seq'), false, 'Tropics Zone', NULL, NULL, NULL, NULL, NULL, NULL, 'SRID=4326;LINESTRING (-63.151124640791096 14.01074681627324, -77.0912026418618 12.870995731013664)'::public.geometry, NULL, NULL, NOW(), NOW(), 123, NULL, NULL, NULL, NULL, NULL, current_setting('vessel.id', false)),
(nextval('api.logbook_id_seq'), false, 'Alaska Zone', NULL, NULL, NULL, NULL, NULL, NULL, 'SRID=4326;LINESTRING (-143.5773697471158 59.4404631255976, -152.35402122385003 56.58243132943173)'::public.geometry, NULL, NULL, NOW(), NOW(), 1234, NULL, NULL, NULL, NULL, NULL, current_setting('vessel.id', false));
(nextval('api.logbook_id_seq'), false, 'Tropics Zone', NULL, NULL, NULL, NULL, NULL, NULL, 'SRID=4326;[Point(-63.151124640791096 14.01074681627324)@2025-01-01, Point(-77.0912026418618 12.870995731013664)@2025-01-02]'::public.tgeogpoint, NOW(), NOW(), 123, NULL, NULL, NULL, NULL, NULL, current_setting('vessel.id', false)),
(nextval('api.logbook_id_seq'), false, 'Alaska Zone', NULL, NULL, NULL, NULL, NULL, NULL, 'SRID=4326;[Point(-143.5773697471158 59.4404631255976)@2025-01-01, Point(-152.35402122385003 56.58243132943173)@2025-01-02]'::public.tgeogpoint, NOW(), NOW(), 1234, NULL, NULL, NULL, NULL, NULL, current_setting('vessel.id', false));
-- Transform the static LINESTRING geometry to a MobilityDB tgeogpoint
-- 'SRID=4326;LINESTRING (-63.151124640791096 14.01074681627324, -77.0912026418618 12.870995731013664)'::public.geometry
-- 'SRID=4326;LINESTRING (-143.5773697471158 59.4404631255976, -152.35402122385003 56.58243132943173)'::public.geometry
--SELECT ST_AsGeoJSON('SRID=4326;LINESTRING (-63.151124640791096 14.01074681627324, -77.0912026418618 12.870995731013664)'::public.geometry);
--SELECT ST_AsGeoJSON(trajectory('SRID=4326;[Point(-63.151124640791096 14.01074681627324)@2025-01-01, Point(-77.0912026418618 12.870995731013664)@2025-01-02]'::public.tgeogpoint));
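-- A hedged round-trip sketch (assumption): trajectory() of the new tgeogpoint
-- should equal the legacy LINESTRING geometry it replaces.
--SELECT ST_Equals(
--    trajectory('SRID=4326;[Point(-63.151124640791096 14.01074681627324)@2025-01-01, Point(-77.0912026418618 12.870995731013664)@2025-01-02]'::public.tgeogpoint)::geometry,
--    'SRID=4326;LINESTRING (-63.151124640791096 14.01074681627324, -77.0912026418618 12.870995731013664)'::public.geometry);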
\echo 'Set config'
SELECT set_config('user.email', 'demo+kapla@openplotter.cloud', false);
--SELECT set_config('vessel.client_id', 'vessels.urn:mrn:imo:mmsi:123456789', false);
\echo 'Process badge'
SELECT badges_logbook_fn(5);
SELECT badges_logbook_fn(6);
SELECT badges_geom_fn(5);
SELECT badges_geom_fn(6);
SELECT badges_logbook_fn(5,NOW()::TEXT);
SELECT badges_logbook_fn(6,NOW()::TEXT);
SELECT badges_geom_fn(5,NOW()::TEXT);
SELECT badges_geom_fn(6,NOW()::TEXT);
\echo 'Check badges for user'
\echo 'Check badges for all users'
SELECT jsonb_object_keys ( a.preferences->'badges' ) FROM auth.accounts a;
\echo 'Check details from vessel_id kapla'
--SELECT get_user_settings_from_vesselid_fn('vessels.urn:mrn:imo:mmsi:123456789'::TEXT);
SELECT
json_build_object(
'boat', v.name,
@@ -53,10 +57,10 @@ SELECT
\echo 'Insert new api.moorages for badges'
INSERT INTO api.moorages
(id,"name",country,stay_code,stay_duration,reference_count,latitude,longitude,geog,home_flag,notes,vessel_id)
(id,"name",country,stay_code,latitude,longitude,geog,home_flag,notes,vessel_id)
OVERRIDING SYSTEM VALUE VALUES
(8,'Badge Mooring Pro',NULL,3,'11 days 00:39:56.418',1,NULL,NULL,NULL,false,'Badge Mooring Pro',current_setting('vessel.id', false)),
(9,'Badge Anchormaster',NULL,2,'26 days 00:49:56.418',1,NULL,NULL,NULL,false,'Badge Anchormaster',current_setting('vessel.id', false));
(8,'Badge Mooring Pro',NULL,3,NULL,NULL,NULL,false,'Badge Mooring Pro',current_setting('vessel.id', false)),
(9,'Badge Anchormaster',NULL,2,NULL,NULL,NULL,false,'Badge Anchormaster',current_setting('vessel.id', false));
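-- A hedged note (assumption): stay_duration and reference_count are presumably
-- derived dynamically now (see api.moorage_view) instead of being stored, e.g.:
--SELECT logs_count, stays_count FROM api.moorage_view WHERE id = 8;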
\echo 'Set config'
SELECT set_config('user.email', 'demo+aava@openplotter.cloud', false);
@@ -68,6 +72,9 @@ SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
\echo 'Process badge'
SELECT badges_moorages_fn();
\echo 'Check badges for all users'
SELECT jsonb_object_keys ( a.preferences->'badges' ) FROM auth.accounts a;
\echo 'Check details from vessel_id aava'
--SELECT get_user_settings_from_vesselid_fn('vessels.urn:mrn:imo:mmsi:787654321'::TEXT);
SELECT

View File

@@ -27,7 +27,7 @@ badges_geom_fn |
-[ RECORD 1 ]--+-
badges_geom_fn |
Check badges for user
Check badges for all users
-[ RECORD 1 ]-----+------------------
jsonb_object_keys | Helmsman
-[ RECORD 2 ]-----+------------------
@@ -76,6 +76,38 @@ Process badge
-[ RECORD 1 ]------+-
badges_moorages_fn |
Check badges for all users
-[ RECORD 1 ]-----+------------------
jsonb_object_keys | Helmsman
-[ RECORD 2 ]-----+------------------
jsonb_object_keys | Wake Maker
-[ RECORD 3 ]-----+------------------
jsonb_object_keys | Balearic Sea
-[ RECORD 4 ]-----+------------------
jsonb_object_keys | Stormtrooper
-[ RECORD 5 ]-----+------------------
jsonb_object_keys | Gulf of Finland
-[ RECORD 6 ]-----+------------------
jsonb_object_keys | Helmsman
-[ RECORD 7 ]-----+------------------
jsonb_object_keys | Wake Maker
-[ RECORD 8 ]-----+------------------
jsonb_object_keys | Club Alaska
-[ RECORD 9 ]-----+------------------
jsonb_object_keys | Stormtrooper
-[ RECORD 10 ]----+------------------
jsonb_object_keys | Captain Award
-[ RECORD 11 ]----+------------------
jsonb_object_keys | Caribbean Sea
-[ RECORD 12 ]----+------------------
jsonb_object_keys | Gulf of Alaska
-[ RECORD 13 ]----+------------------
jsonb_object_keys | Gulf of Finland
-[ RECORD 14 ]----+------------------
jsonb_object_keys | Navigator Award
-[ RECORD 15 ]----+------------------
jsonb_object_keys | Tropical Traveler
Check details from vessel_id aava
-[ RECORD 1 ]-+--------------------------------------------------------------------------------------------------------------
user_settings | {"boat" : "aava", "recipient" : "first_aava", "email" : "demo+aava@openplotter.cloud", "pushover_key" : null}

View File

@@ -21,23 +21,27 @@ SELECT v.vessel_id as "vessel_id" FROM auth.vessels v WHERE v.owner_email = 'dem
--\echo :"vessel_id"
SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
-- user_role
SET ROLE user_role;
-- Test logbook for user
\echo 'logbook'
SELECT count(*) FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
\echo 'logbook'
SELECT name,_from_time IS NOT NULL AS _from_time,_to_time IS NOT NULL AS _to_time, track_geojson IS NOT NULL AS track_geojson, track_geom, distance,duration,avg_speed,max_speed,max_wind_speed,notes,extra FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
--SELECT name,_from_time IS NOT NULL AS _from_time,_to_time IS NOT NULL AS _to_time, track_geojson IS NOT NULL AS track_geojson, trajectory(trip)::geometry as track_geom, distance,duration,round(avg_speed::NUMERIC,6),max_speed,max_wind_speed,notes,extra FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
SELECT name,_from_time IS NOT NULL AS _from_time,_to_time IS NOT NULL AS _to_time, api.export_logbook_geojson_trip_fn(id) IS NOT NULL AS track_geojson, trajectory(trip)::geometry as track_geom, distance,duration,round(avg_speed::NUMERIC,6),max_speed,max_wind_speed,notes,extra FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
-- Test stays for user
\echo 'stays'
SELECT count(*) FROM api.stays WHERE vessel_id = current_setting('vessel.id', false);
\echo 'stays'
SELECT active,name,geog,stay_code FROM api.stays WHERE vessel_id = current_setting('vessel.id', false);
SELECT active,name IS NOT NULL AS name,geog,stay_code FROM api.stays WHERE vessel_id = current_setting('vessel.id', false);
-- Test event logs view for user
\echo 'eventlogs_view'
SELECT count(*) from api.eventlogs_view;
-- Test event logs view for user
-- Test stats logs view for user
\echo 'stats_logs_fn'
SELECT api.stats_logs_fn(null, null) INTO stats_jsonb;
SELECT stats_logs_fn->'name' AS name,
@@ -64,8 +68,35 @@ SELECT extra FROM api.logbook l WHERE id = 1 AND vessel_id = current_setting('ve
SELECT api.update_logbook_observations_fn(1, '{"observations":{"cloudCoverage":1}}'::TEXT);
SELECT extra FROM api.logbook l WHERE id = 1 AND vessel_id = current_setting('vessel.id', false);
\echo 'add tags to logbook'
SELECT extra FROM api.logbook l WHERE id = 1 AND vessel_id = current_setting('vessel.id', false);
SELECT api.update_logbook_observations_fn(1, '{"tags": ["tag_name"]}'::TEXT);
SELECT extra FROM api.logbook l WHERE id = 1 AND vessel_id = current_setting('vessel.id', false);
\echo 'Check logbook geojson LineString properties'
WITH logbook_tbl AS (
SELECT api.logbook_update_geojson_trip_fn(id) AS geojson
FROM api.logbook WHERE id = 1 AND vessel_id = current_setting('vessel.id', false)
)
SELECT jsonb_object_keys(jsonb_path_query(geojson, '$.features[0].properties'))
FROM logbook_tbl;
\echo 'Check logbook geojson Point properties'
WITH logbook_tbl AS (
SELECT api.logbook_update_geojson_trip_fn(id) AS geojson
FROM api.logbook WHERE id = 1 AND vessel_id = current_setting('vessel.id', false)
)
SELECT jsonb_object_keys(jsonb_path_query(geojson, '$.features[1].properties'))
FROM logbook_tbl;
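-- A hedged sketch (assumption): a single trackpoint property can be extracted
-- with the same jsonb path machinery, e.g. the first point's speed over ground:
--SELECT jsonb_path_query(api.logbook_update_geojson_trip_fn(1), '$.features[1].properties.sog');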
-- Check export
--\echo 'check logbook export fn'
\echo 'Check logbook export fn'
--SELECT api.export_logbook_geojson_fn(1);
--SELECT api.export_logbook_gpx_fn(1);
--SELECT api.export_logbook_kml_fn(1);
SELECT api.export_logbook_gpx_trip_fn(1) IS NOT NULL AS gpx_trip;
SELECT api.export_logbook_kml_trip_fn(1) IS NOT NULL AS kml_trip;
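-- A hedged sketch (assumption): the exports should also be non-empty when cast
-- to text, not merely non-null.
--SELECT length(api.export_logbook_gpx_trip_fn(1)::TEXT) > 0 AS gpx_not_empty;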
-- Check history
--\echo 'monitoring history fn'
--select api.monitoring_history_fn();
--select api.monitoring_history_fn('24');

View File

@@ -11,37 +11,38 @@ user_id | t
-[ RECORD 1 ]
vessel_id | t
SET
logbook
-[ RECORD 1 ]
count | 2
logbook
-[ RECORD 1 ]--+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-[ RECORD 1 ]--+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Pojoviken to Norra hamnen
_from_time | t
_to_time | t
track_geojson | t
track_geom | 0102000020E61000001C000000B0DEBBE0E68737404DA938FBF0094E40B0DEBBE0E68737404DA938FBF0094E4020D26F5F0786374030BB270F0B094E400C6E7ED60F843740AA60545227084E40D60FC48C03823740593CE27D42074E407B39D9F322803740984C158C4A064E4091ED7C3F357E3740898BB63D54054E40A8A1208B477C37404BA3DC9059044E404C5CB4EDA17A3740C4F856115B034E40A9A44E4013793740D8F0F44A59024E40E4839ECDAA773740211FF46C56014E405408D147067637408229F03B73004E40787AA52C43743740F90FE9B7AFFF4D40F8098D4D18723740C217265305FF4D4084E82303537037409A2D464AA0FE4D4022474DCE636F37402912396A72FE4D408351499D806E374088CFB02B40FE4D4076711B0DE06D3740B356C7040FFE4D404EAC66B0BC6E374058A835CD3BFE4D40D7A3703D0A6F3740D3E10EC15EFE4D4087602F277B6E3740A779C7293AFE4D4087602F277B6E3740A779C7293AFE4D402063EE5A426E3740B5A679C729FE4D40381DEE10EC6D37409ECA7C1A0AFE4D40E2C46A06CB6B37400A43F7BF36FD4D4075931804566E3740320BDAD125FD4D409A2D464AA06E37404A5658830AFD4D40029A081B9E6E37404A5658830AFD4D40
track_geom | 0102000020E61000001A000000B0DEBBE0E68737404DA938FBF0094E4020D26F5F0786374030BB270F0B094E400C6E7ED60F843740AA60545227084E40D60FC48C03823740593CE27D42074E407B39D9F322803740984C158C4A064E4091ED7C3F357E3740898BB63D54054E40A8A1208B477C37404BA3DC9059044E404C5CB4EDA17A3740C4F856115B034E40A9A44E4013793740D8F0F44A59024E40E4839ECDAA773740211FF46C56014E405408D147067637408229F03B73004E40787AA52C43743740F90FE9B7AFFF4D40F8098D4D18723740C217265305FF4D4084E82303537037409A2D464AA0FE4D4022474DCE636F37402912396A72FE4D408351499D806E374088CFB02B40FE4D4076711B0DE06D3740B356C7040FFE4D404EAC66B0BC6E374058A835CD3BFE4D40D7A3703D0A6F3740D3E10EC15EFE4D4087602F277B6E3740A779C7293AFE4D402063EE5A426E3740B5A679C729FE4D40381DEE10EC6D37409ECA7C1A0AFE4D40E2C46A06CB6B37400A43F7BF36FD4D4075931804566E3740320BDAD125FD4D409A2D464AA06E37404A5658830AFD4D40029A081B9E6E37404A5658830AFD4D40
distance | 7.6447
duration | PT27M
avg_speed | 3.6357142857142852
round | 3.635714
max_speed | 6.1
max_wind_speed | 22.1
notes | new log note
extra | {"metrics": {"propulsion.main.runTime": 10}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}}
-[ RECORD 2 ]--+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
notes |
extra | {"metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}, "avg_wind_speed": 14.549999999999999}
-[ RECORD 2 ]--+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Norra hamnen to Ekenäs
_from_time | t
_to_time | t
track_geojson | t
track_geom | 0102000020E610000015000000029A081B9E6E37404A5658830AFD4D40029A081B9E6E37404A5658830AFD4D404806A6C0EF6C3740DA1B7C6132FD4D40FE65F7E461693740226C787AA5FC4D407DD3E10EC1663740B29DEFA7C6FB4D40898BB63D5465374068479724BCFA4D409A5271F6E1633740B6847CD0B3F94D40431CEBE236623740E9263108ACF84D402C6519E2585F37407E678EBFC7F74D4096218E75715B374027C5B45C23F74D402AA913D044583740968DE1C46AF64D405AF5B9DA8A5537407BEF829B9FF54D407449C2ABD253374086C954C1A8F44D407D1A0AB278543740F2B0506B9AF34D409D11A5BDC15737406688635DDCF24D4061C3D32B655937402CAF6F3ADCF14D408988888888583740B3319C58CDF04D4021FAC8C0145837408C94405DB7EF4D40B8F9593F105B37403DC0804BEDEE4D40DE4C5FE2A25D3740AE47E17A14EE4D40DE4C5FE2A25D3740AE47E17A14EE4D40
track_geom | 0102000020E610000013000000029A081B9E6E37404A5658830AFD4D404806A6C0EF6C3740DA1B7C6132FD4D40FE65F7E461693740226C787AA5FC4D407DD3E10EC1663740B29DEFA7C6FB4D40898BB63D5465374068479724BCFA4D409A5271F6E1633740B6847CD0B3F94D40431CEBE236623740E9263108ACF84D402C6519E2585F37407E678EBFC7F74D4096218E75715B374027C5B45C23F74D402AA913D044583740968DE1C46AF64D405AF5B9DA8A5537407BEF829B9FF54D407449C2ABD253374086C954C1A8F44D407D1A0AB278543740F2B0506B9AF34D409D11A5BDC15737406688635DDCF24D4061C3D32B655937402CAF6F3ADCF14D408988888888583740B3319C58CDF04D4021FAC8C0145837408C94405DB7EF4D40B8F9593F105B37403DC0804BEDEE4D40DE4C5FE2A25D3740AE47E17A14EE4D40
distance | 8.8968
duration | PT20M
avg_speed | 5.4523809523809526
round | 5.452381
max_speed | 6.5
max_wind_speed | 37.2
notes |
extra | {"metrics": {"propulsion.main.runTime": 11}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}}
extra | {"metrics": {"propulsion.main.runTime": "PT11S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}, "avg_wind_speed": 10.476190476190478}
stays
-[ RECORD 1 ]
@@ -50,39 +51,39 @@ count | 3
stays
-[ RECORD 1 ]-------------------------------------------------
active | t
name |
name | f
geog |
stay_code | 2
-[ RECORD 2 ]-------------------------------------------------
active | f
name | 0 days stay at Pojoviken in November 2023
name | t
geog | 0101000020E6100000B0DEBBE0E68737404DA938FBF0094E40
stay_code | 2
-[ RECORD 3 ]-------------------------------------------------
active | f
name | 0 days stay at Norra hamnen in November 2023
name | t
geog | 0101000020E6100000029A081B9E6E37404A5658830AFD4D40
stay_code | 4
eventlogs_view
-[ RECORD 1 ]
count | 11
count | 12
stats_logs_fn
SELECT 1
-[ RECORD 1 ]+----------
-[ RECORD 1 ]+--------
name | "kapla"
count | 4
max_speed | 7.1
count | 2
max_speed | 6.5
max_distance | 8.8968
max_duration | "PT1H11M"
?column? | 3
?column? | 30.1154
?column? | "PT2H43M"
?column? | 44.2
max_duration | "PT27M"
?column? | 2
?column? | 16.5415
?column? | "PT47M"
?column? | 37.2
?column? | 2
?column? | 1
?column? | 2
?column? | 4
?column? | 4
first_date | t
last_date | t
@@ -91,12 +92,83 @@ DROP TABLE
stats_logs_fn |
update_logbook_observations_fn
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": 10}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}}
-[ RECORD 1 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}, "avg_wind_speed": 14.549999999999999}
-[ RECORD 1 ]------------------+--
update_logbook_observations_fn | t
-[ RECORD 1 ]---------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": 10}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}}
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}, "avg_wind_speed": 14.549999999999999}
add tags to logbook
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}, "avg_wind_speed": 14.549999999999999}
-[ RECORD 1 ]------------------+--
update_logbook_observations_fn | t
-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
extra | {"tags": ["tag_name"], "metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}, "avg_wind_speed": 14.549999999999999}
Check logbook geojson LineString properties
-[ RECORD 1 ]-----+-----------------
jsonb_object_keys | id
-[ RECORD 2 ]-----+-----------------
jsonb_object_keys | _to
-[ RECORD 3 ]-----+-----------------
jsonb_object_keys | name
-[ RECORD 4 ]-----+-----------------
jsonb_object_keys | _from
-[ RECORD 5 ]-----+-----------------
jsonb_object_keys | notes
-[ RECORD 6 ]-----+-----------------
jsonb_object_keys | times
-[ RECORD 7 ]-----+-----------------
jsonb_object_keys | _to_time
-[ RECORD 8 ]-----+-----------------
jsonb_object_keys | distance
-[ RECORD 9 ]-----+-----------------
jsonb_object_keys | duration
-[ RECORD 10 ]----+-----------------
jsonb_object_keys | avg_speed
-[ RECORD 11 ]----+-----------------
jsonb_object_keys | max_speed
-[ RECORD 12 ]----+-----------------
jsonb_object_keys | _from_time
-[ RECORD 13 ]----+-----------------
jsonb_object_keys | _to_moorage_id
-[ RECORD 14 ]----+-----------------
jsonb_object_keys | avg_wind_speed
-[ RECORD 15 ]----+-----------------
jsonb_object_keys | max_wind_speed
-[ RECORD 16 ]----+-----------------
jsonb_object_keys | _from_moorage_id
Check logbook geojson Point properties
-[ RECORD 1 ]-----+-------
jsonb_object_keys | cog
-[ RECORD 2 ]-----+-------
jsonb_object_keys | sog
-[ RECORD 3 ]-----+-------
jsonb_object_keys | twa
-[ RECORD 4 ]-----+-------
jsonb_object_keys | twd
-[ RECORD 5 ]-----+-------
jsonb_object_keys | tws
-[ RECORD 6 ]-----+-------
jsonb_object_keys | time
-[ RECORD 7 ]-----+-------
jsonb_object_keys | trip
-[ RECORD 8 ]-----+-------
jsonb_object_keys | notes
-[ RECORD 9 ]-----+-------
jsonb_object_keys | status
Check logbook export fn
-[ RECORD 1 ]
gpx_trip | t
-[ RECORD 1 ]
kml_trip | t

View File

@@ -12,9 +12,19 @@ select current_database();
\x on
-- Check the number of pending processes
-- Should be 22
\echo 'Check the number of pending processes'
-- Should be 24
SELECT count(*) as jobs from public.process_queue pq where pq.processed is null;
--set role scheduler
-- Switch to the scheduler role
--\echo 'Switch to the scheduler role'
--SET ROLE scheduler;
-- Should be 24
SELECT count(*) as jobs from public.process_queue pq where pq.processed is null;
-- Run the cron jobs
SELECT public.run_cron_jobs();
-- Check any pending job
SELECT count(*) as any_pending_jobs from public.process_queue pq where pq.processed is null;
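-- A hedged debugging sketch (assumption): break any leftover jobs down per
-- channel to see which handler did not run.
--SELECT channel, count(*) FROM public.process_queue pq WHERE pq.processed IS NULL GROUP BY channel;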
-- Check the number of metrics entries
\echo 'Check the number of metrics entries'
SELECT count(*) as metrics_count from api.metrics;

View File

@@ -5,12 +5,20 @@
You are now connected to database "signalk" as user "username".
Expanded display is on.
Check the number of pending processes
-[ RECORD 1 ]
jobs | 24
jobs | 26
-[ RECORD 1 ]
jobs | 26
-[ RECORD 1 ]-+-
run_cron_jobs |
-[ RECORD 1 ]----+--
any_pending_jobs | 0
any_pending_jobs | 2
Check the number of metrics entries
-[ RECORD 1 ]-+----
metrics_count | 173

View File

@@ -15,21 +15,21 @@ select current_database();
-- grafana_auth
SET ROLE grafana_auth;
\echo 'ROLE grafana_auth current_setting'
SELECT current_user, current_setting('user.email', true), current_setting('vessel.client_id', true), current_setting('vessel.id', true);
SELECT current_user, current_setting('user.email', true), current_setting('vessel.id', true);
--SELECT a.pass,v.name,m.client_id FROM auth.accounts a JOIN auth.vessels v ON a.email = 'demo+kapla@openplotter.cloud' AND a.role = 'user_role' AND cast(a.preferences->>'email_valid' as Boolean) = True AND v.owner_email = a.email JOIN api.metadata m ON m.vessel_id = v.vessel_id;
--SELECT a.pass,v.name,m.client_id FROM auth.accounts a JOIN auth.vessels v ON a.email = 'demo+kapla@openplotter.cloud' AND a.role = 'user_role' AND v.owner_email = a.email JOIN api.metadata m ON m.vessel_id = v.vessel_id;
\echo 'link vessel and user based on current_setting'
SELECT v.name,m.client_id FROM auth.accounts a JOIN auth.vessels v ON a.role = 'user_role' AND v.owner_email = a.email JOIN api.metadata m ON m.vessel_id = v.vessel_id;
SELECT v.name, m.vessel_id IS NOT NULL AS vessel_id FROM auth.accounts a JOIN auth.vessels v ON a.role = 'user_role' AND v.owner_email = a.email JOIN api.metadata m ON m.vessel_id = v.vessel_id ORDER BY a.id DESC;
\echo 'auth.accounts details'
SELECT a.user_id IS NOT NULL AS user_id, a.email, a.first, a.last, a.pass IS NOT NULL AS pass, a.role, a.preferences->'telegram'->'chat' AS telegram, a.preferences->'pushover_user_key' AS pushover_user_key FROM auth.accounts AS a;
SELECT a.user_id IS NOT NULL AS user_id, a.email, a.first, a.last, a.pass IS NOT NULL AS pass, a.role, a.preferences->'telegram'->'chat' AS telegram, a.preferences->'pushover_user_key' AS pushover_user_key FROM auth.accounts AS a ORDER BY a.id DESC;
\echo 'auth.vessels details'
--SELECT 'SELECT ' || STRING_AGG('v.' || column_name, ', ') || ' FROM auth.vessels AS v' FROM information_schema.columns WHERE table_name = 'vessels' AND table_schema = 'auth' AND column_name NOT IN ('created_at', 'updated_at');
SELECT v.vessel_id IS NOT NULL AS vessel_id, v.owner_email, v.mmsi, v.name, v.role FROM auth.vessels AS v;
\echo 'api.metadata details'
--
SELECT m.id, m.name, m.mmsi, m.client_id, m.length, m.beam, m.height, m.ship_type, m.plugin_version, m.signalk_version, m.time IS NOT NULL AS time, m.active FROM api.metadata AS m;
SELECT vessel_id IS NOT NULL AS vessel_id_not_null, m.name, m.mmsi, m.length, m.beam, m.height, m.ship_type, m.plugin_version, m.signalk_version, m.time IS NOT NULL AS time, m.active, configuration IS NOT NULL AS configuration_not_null, available_keys FROM api.metadata AS m ORDER BY m.name DESC;
--
-- grafana
@@ -46,16 +46,16 @@ SELECT v.vessel_id as "vessel_id" FROM auth.vessels v WHERE v.owner_email = 'dem
SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
--SELECT current_user, current_setting('user.email', true), current_setting('vessel.client_id', true), current_setting('vessel.id', true);
SELECT current_user, current_setting('user.email', true), current_setting('vessel.client_id', true);
SELECT current_user, current_setting('user.email', true);
SELECT v.name AS __text, m.client_id AS __value FROM auth.vessels v JOIN api.metadata m ON v.owner_email = 'demo+kapla@openplotter.cloud' and m.vessel_id = v.vessel_id;
SELECT v.name AS __text, m.vessel_id IS NOT NULL AS __value FROM auth.vessels v JOIN api.metadata m ON v.owner_email = 'demo+kapla@openplotter.cloud' and m.vessel_id = v.vessel_id;
\echo 'auth.vessels details'
--SELECT * FROM auth.vessels v;
SELECT v.vessel_id IS NOT NULL AS vessel_id, v.owner_email, v.mmsi, v.name, v.role FROM auth.vessels AS v;
--SELECT * FROM api.metadata m;
\echo 'api.metadata details'
SELECT m.id, m.name, m.mmsi, m.client_id, m.length, m.beam, m.height, m.ship_type, m.plugin_version, m.signalk_version, m.time IS NOT NULL AS time, m.active FROM api.metadata AS m;
SELECT vessel_id IS NOT NULL AS vessel_id_not_null, m.name, m.mmsi, m.length, m.beam, m.height, m.ship_type, m.plugin_version, m.signalk_version, m.time IS NOT NULL AS time, m.active, configuration IS NOT NULL AS configuration_not_null, available_keys FROM api.metadata AS m;
\echo 'api.logs_view'
--SELECT * FROM api.logbook l;
@@ -65,15 +65,20 @@ SELECT l.id, l.name, l.from, l.to, l.distance, l.duration, l._from_moorage_id, l
\echo 'api.stays'
--SELECT * FROM api.stays s;
SELECT m.id, m.vessel_id IS NOT NULL AS vessel_id, m.moorage_id, m.active, m.name, m.latitude, m.longitude, m.geog, m.arrived IS NOT NULL AS arrived, m.departed IS NOT NULL AS departed, m.duration, m.stay_code, m.notes FROM api.stays AS m;
SELECT m.id, m.vessel_id IS NOT NULL AS vessel_id, m.moorage_id, m.active, m.name IS NOT NULL AS name, m.latitude, m.longitude, m.geog, m.arrived IS NOT NULL AS arrived, m.departed IS NOT NULL AS departed, m.duration, m.stay_code, m.notes FROM api.stays AS m;
\echo 'stays_view'
\echo 'api.stays_view'
--SELECT * FROM api.stays_view s;
SELECT m.id, m.name IS NOT NULL AS name, m.moorage, m.moorage_id, m.duration, m.stayed_at, m.stayed_at_id, m.arrived IS NOT NULL AS arrived, m.departed IS NOT NULL AS departed, m.notes FROM api.stays_view AS m;
\echo 'api.moorages'
--SELECT * FROM api.moorages m;
SELECT m.id, m.vessel_id IS NOT NULL AS vessel_id, m.name, m.country, m.stay_code, m.stay_duration, m.reference_count, m.latitude, m.longitude, m.geog, m.home_flag, m.notes FROM api.moorages AS m;
--SELECT m.id, m.vessel_id IS NOT NULL AS vessel_id, m.name, m.country, m.stay_code, m.stay_duration, m.reference_count, m.latitude, m.longitude, m.geog, m.home_flag, m.notes FROM api.moorages AS m;
SELECT m.id, m.vessel_id IS NOT NULL AS vessel_id, m.name, m.country, m.stay_code, m.latitude, m.longitude, m.geog, m.home_flag, m.notes FROM api.moorages AS m;
\echo 'api.moorages_view'
SELECT * FROM api.moorages_view s;
SELECT * FROM api.moorages_view;
\echo 'api.moorage_view'
--SELECT * FROM api.moorage_view;
SELECT m.id, m.name, default_stay, m.latitude, m.longitude, m.geog, m.home, m.notes, logs_count, stays_count, stay_first_seen_id, stay_last_seen_id, stay_first_seen IS NOT NULL AS stay_first_seen, stay_last_seen IS NOT NULL AS stay_last_seen FROM api.moorage_view m;

View File

@@ -11,15 +11,14 @@ ROLE grafana_auth current_setting
current_user | grafana_auth
current_setting |
current_setting |
current_setting |
link vessel and user based on current_setting
-[ RECORD 1 ]----------------------------------------------------------------
-[ RECORD 1 ]----
name | aava
client_id | vessels.urn:mrn:imo:mmsi:787654321
-[ RECORD 2 ]----------------------------------------------------------------
vessel_id | t
-[ RECORD 2 ]----
name | kapla
client_id | vessels.urn:mrn:signalk:uuid:5b4f7543-7153-4840-b139-761310b242fd
vessel_id | t
auth.accounts details
-[ RECORD 1 ]-----+-----------------------------
@@ -56,32 +55,34 @@ name | aava
role | vessel_role
api.metadata details
-[ RECORD 1 ]---+------------------------------------------------------------------
id | 1
name | kapla
mmsi | 123456789
client_id | vessels.urn:mrn:signalk:uuid:5b4f7543-7153-4840-b139-761310b242fd
length | 12
beam | 10
height | 24
ship_type | 36
plugin_version | 0.0.1
signalk_version | signalk_version
time | t
active | t
-[ RECORD 2 ]---+------------------------------------------------------------------
id | 2
name | aava
mmsi | 787654321
client_id | vessels.urn:mrn:imo:mmsi:787654321
length | 12
beam | 10
height | 24
ship_type | 37
plugin_version | 1.0.2
signalk_version | 1.20.0
time | t
active | t
-[ RECORD 1 ]----------+----------------
vessel_id_not_null | t
name | kapla
mmsi | 123456789
length | 12
beam | 10
height | 24
ship_type | 36
plugin_version | 0.0.1
signalk_version | signalk_version
time | t
active | t
configuration_not_null | t
available_keys |
-[ RECORD 2 ]----------+----------------
vessel_id_not_null | t
name | aava
mmsi | 787654321
length | 12
beam | 10
height | 24
ship_type | 37
plugin_version | 1.0.2
signalk_version | 1.20.0
time | t
active | t
configuration_not_null | t
available_keys | []
SET
ROLE grafana current_setting
@@ -93,11 +94,10 @@ vessel_id | t
-[ RECORD 1 ]---+-----------------------------
current_user | grafana
current_setting | demo+kapla@openplotter.cloud
current_setting |
-[ RECORD 1 ]--------------------------------------------------------------
-[ RECORD 1 ]--
__text | kapla
__value | vessels.urn:mrn:signalk:uuid:5b4f7543-7153-4840-b139-761310b242fd
__value | t
auth.vessels details
-[ RECORD 1 ]-----------------------------
@@ -108,19 +108,20 @@ name | kapla
role | vessel_role
api.metadata details
-[ RECORD 1 ]---+------------------------------------------------------------------
id | 1
name | kapla
mmsi | 123456789
client_id | vessels.urn:mrn:signalk:uuid:5b4f7543-7153-4840-b139-761310b242fd
length | 12
beam | 10
height | 24
ship_type | 36
plugin_version | 0.0.1
signalk_version | signalk_version
time | t
active | t
-[ RECORD 1 ]----------+----------------
vessel_id_not_null | t
name | kapla
mmsi | 123456789
length | 12
beam | 10
height | 24
ship_type | 36
plugin_version | 0.0.1
signalk_version | signalk_version
time | t
active | t
configuration_not_null | t
available_keys |
api.logs_view
-[ RECORD 1 ]----+-----------------------
@@ -148,7 +149,7 @@ id | 3
vessel_id | t
moorage_id |
active | t
name |
name | f
latitude | 59.86
longitude | 23.365766666666666
geog |
@@ -162,7 +163,7 @@ id | 1
vessel_id | t
moorage_id | 1
active | f
name | patch stay name 3
name | t
latitude | 60.077666666666666
longitude | 23.530866666666668
geog | 0101000020E6100000B0DEBBE0E68737404DA938FBF0094E40
@@ -176,7 +177,7 @@ id | 2
vessel_id | t
moorage_id | 2
active | f
name | 0 days stay at Norra hamnen in November 2023
name | t
latitude | 59.97688333333333
longitude | 23.4321
geog | 0101000020E6100000029A081B9E6E37404A5658830AFD4D40
@@ -186,7 +187,7 @@ duration | PT2M
stay_code | 4
notes |
stays_view
api.stays_view
-[ RECORD 1 ]+---------------------
id | 2
name | t
@@ -211,45 +212,39 @@ departed | t
notes | new stay note 3
api.moorages
-[ RECORD 1 ]---+---------------------------------------------------
id | 1
vessel_id | t
name | patch moorage name 3
country |
stay_code | 2
stay_duration | PT1M
reference_count | 1
latitude | 60.0776666666667
longitude | 23.5308666666667
geog | 0101000020E6100000B9DEBBE0E687374052A938FBF0094E40
home_flag | t
notes | new moorage note 3
-[ RECORD 2 ]---+---------------------------------------------------
id | 2
vessel_id | t
name | Norra hamnen
country |
stay_code | 4
stay_duration | PT2M
reference_count | 2
latitude | 59.9768833333333
longitude | 23.4321
geog | 0101000020E6100000029A081B9E6E3740455658830AFD4D40
home_flag | f
notes |
-[ RECORD 3 ]---+---------------------------------------------------
id | 3
vessel_id | t
name | Ekenäs
country | fi
stay_code | 1
stay_duration |
reference_count | 1
latitude | 59.86
longitude | 23.3657666666667
geog | 0101000020E6100000E84C5FE2A25D3740AE47E17A14EE4D40
home_flag | f
notes |
-[ RECORD 1 ]-------------------------------------------------
id | 1
vessel_id | t
name | patch moorage name 3
country | fi
stay_code | 2
latitude | 60.0776666666667
longitude | 23.5308666666667
geog | 0101000020E6100000B9DEBBE0E687374052A938FBF0094E40
home_flag | t
notes | new moorage note 3
-[ RECORD 2 ]-------------------------------------------------
id | 2
vessel_id | t
name | Norra hamnen
country | fi
stay_code | 4
latitude | 59.9768833333333
longitude | 23.4321
geog | 0101000020E6100000029A081B9E6E3740455658830AFD4D40
home_flag | f
notes |
-[ RECORD 3 ]-------------------------------------------------
id | 3
vessel_id | t
name | Ekenäs
country | fi
stay_code | 1
latitude | 59.86
longitude | 23.3657666666667
geog | 0101000020E6100000E84C5FE2A25D3740AE47E17A14EE4D40
home_flag | f
notes |
api.moorages_view
-[ RECORD 1 ]-------+---------------------
@@ -257,15 +252,67 @@ id | 2
moorage | Norra hamnen
default_stay | Dock
default_stay_id | 4
total_stay | 0
total_duration | PT2M
arrivals_departures | 2
total_duration | PT2M
-[ RECORD 2 ]-------+---------------------
id | 1
moorage | patch moorage name 3
default_stay | Anchor
default_stay_id | 2
total_stay | 0
total_duration | PT1M
arrivals_departures | 1
total_duration | PT1M
-[ RECORD 3 ]-------+---------------------
id | 3
moorage | Ekenäs
default_stay | Unknown
default_stay_id | 1
arrivals_departures | 1
total_duration | PT0S
api.moorage_view
-[ RECORD 1 ]------+---------------------------------------------------
id | 3
name | Ekenäs
default_stay | Unknown
latitude | 59.86
longitude | 23.3657666666667
geog | 0101000020E6100000E84C5FE2A25D3740AE47E17A14EE4D40
home | f
notes |
logs_count | 1
stays_count | 0
stay_first_seen_id |
stay_last_seen_id |
stay_first_seen | f
stay_last_seen | f
-[ RECORD 2 ]------+---------------------------------------------------
id | 2
name | Norra hamnen
default_stay | Dock
latitude | 59.9768833333333
longitude | 23.4321
geog | 0101000020E6100000029A081B9E6E3740455658830AFD4D40
home | f
notes |
logs_count | 2
stays_count | 1
stay_first_seen_id | 2
stay_last_seen_id | 2
stay_first_seen | t
stay_last_seen | t
-[ RECORD 3 ]------+---------------------------------------------------
id | 1
name | patch moorage name 3
default_stay | Anchor
latitude | 60.0776666666667
longitude | 23.5308666666667
geog | 0101000020E6100000B9DEBBE0E687374052A938FBF0094E40
home | t
notes | new moorage note 3
logs_count | 1
stays_count | 1
stay_first_seen_id | 1
stay_last_seen_id | 1
stay_first_seen | t
stay_last_seen | t

tests/sql/logbook.sql Normal file
View File

@@ -0,0 +1,53 @@
---------------------------------------------------------------------------
-- Listing
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
-- output display format
\x on
\echo 'Validate logbook operation'
-- set user_id
SELECT a.user_id as "user_id" FROM auth.accounts a WHERE a.email = 'demo+kapla@openplotter.cloud' \gset
--\echo :"user_id"
SELECT set_config('user.id', :'user_id', false) IS NOT NULL as user_id;
-- set vessel_id
SELECT v.vessel_id as "vessel_id" FROM auth.vessels v WHERE v.owner_email = 'demo+kapla@openplotter.cloud' \gset
--\echo :"vessel_id"
SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
-- Count logbook for user
\echo 'logbook'
SELECT count(*) FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
\echo 'logbook'
-- track_geom and track_geojson are now derived dynamically from MobilityDB
SELECT name,_from_time IS NOT NULL AS _from_time_not_null, _to_time IS NOT NULL AS _to_time_not_null, trajectory(trip) AS track_geom, distance,duration,avg_speed,max_speed,max_wind_speed,notes,extra FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false) ORDER BY id ASC;
--
-- user_role
SET ROLE user_role;
\echo 'ROLE user_role current_setting'
SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
-- Count logbook for user
\echo 'logbook'
SELECT count(*) FROM api.logbook;
\echo 'logbook'
-- track_geom and track_geojson are now derived dynamically from MobilityDB
SELECT name,_from_time IS NOT NULL AS _from_time_not_null, _to_time IS NOT NULL AS _to_time_not_null, trajectory(trip) AS track_geom, distance,duration,avg_speed,max_speed,max_wind_speed,notes,extra FROM api.logbook ORDER BY id ASC;
-- Delete logbook for user
\echo 'Delete logbook for user kapla'
SELECT api.delete_logbook_fn(5); -- delete Tropics Zone
SELECT api.delete_logbook_fn(6); -- delete Alaska Zone
-- Merge logbook for user
\echo 'Merge logbook for user kapla'
SELECT api.merge_logbook_fn(1,2);
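-- A hedged verification sketch (assumption): after the deletes and the merge
-- the remaining logbook count should shrink accordingly.
--SELECT count(*) FROM api.logbook;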

View File

@@ -0,0 +1,138 @@
current_database
------------------
signalk
(1 row)
You are now connected to database "signalk" as user "username".
Expanded display is on.
Validate logbook operation
-[ RECORD 1 ]
user_id | t
-[ RECORD 1 ]
vessel_id | t
logbook
-[ RECORD 1 ]
count | 4
logbook
-[ RECORD 1 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | patch log name 3
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E61000001A000000B0DEBBE0E68737404DA938FBF0094E4020D26F5F0786374030BB270F0B094E400C6E7ED60F843740AA60545227084E40D60FC48C03823740593CE27D42074E407B39D9F322803740984C158C4A064E4091ED7C3F357E3740898BB63D54054E40A8A1208B477C37404BA3DC9059044E404C5CB4EDA17A3740C4F856115B034E40A9A44E4013793740D8F0F44A59024E40E4839ECDAA773740211FF46C56014E405408D147067637408229F03B73004E40787AA52C43743740F90FE9B7AFFF4D40F8098D4D18723740C217265305FF4D4084E82303537037409A2D464AA0FE4D4022474DCE636F37402912396A72FE4D408351499D806E374088CFB02B40FE4D4076711B0DE06D3740B356C7040FFE4D404EAC66B0BC6E374058A835CD3BFE4D40D7A3703D0A6F3740D3E10EC15EFE4D4087602F277B6E3740A779C7293AFE4D402063EE5A426E3740B5A679C729FE4D40381DEE10EC6D37409ECA7C1A0AFE4D40E2C46A06CB6B37400A43F7BF36FD4D4075931804566E3740320BDAD125FD4D409A2D464AA06E37404A5658830AFD4D40029A081B9E6E37404A5658830AFD4D40
distance | 7.6447
duration | PT27M
avg_speed | 3.6357142857142852
max_speed | 6.1
max_wind_speed | 22.1
notes | new log note 3
extra | {"tags": ["tag_name"], "metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}, "avg_wind_speed": 14.549999999999999}
-[ RECORD 2 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Norra hamnen to Ekenäs
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E610000013000000029A081B9E6E37404A5658830AFD4D404806A6C0EF6C3740DA1B7C6132FD4D40FE65F7E461693740226C787AA5FC4D407DD3E10EC1663740B29DEFA7C6FB4D40898BB63D5465374068479724BCFA4D409A5271F6E1633740B6847CD0B3F94D40431CEBE236623740E9263108ACF84D402C6519E2585F37407E678EBFC7F74D4096218E75715B374027C5B45C23F74D402AA913D044583740968DE1C46AF64D405AF5B9DA8A5537407BEF829B9FF54D407449C2ABD253374086C954C1A8F44D407D1A0AB278543740F2B0506B9AF34D409D11A5BDC15737406688635DDCF24D4061C3D32B655937402CAF6F3ADCF14D408988888888583740B3319C58CDF04D4021FAC8C0145837408C94405DB7EF4D40B8F9593F105B37403DC0804BEDEE4D40DE4C5FE2A25D3740AE47E17A14EE4D40
distance | 8.8968
duration | PT20M
avg_speed | 5.4523809523809526
max_speed | 6.5
max_wind_speed | 37.2
notes |
extra | {"metrics": {"propulsion.main.runTime": "PT11S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}, "avg_wind_speed": 10.476190476190478}
-[ RECORD 3 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Tropics Zone
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E610000002000000A4E85E0D58934FC000DC509B80052C40BC069B43D64553C090510727F3BD2940
distance | 123
duration |
avg_speed |
max_speed |
max_wind_speed |
notes |
extra |
-[ RECORD 4 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Alaska Zone
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E610000002000000FDB11ED079F261C090C47F1861B84D40D3505124540B63C09C091C1C8D4A4C40
distance | 1234
duration |
avg_speed |
max_speed |
max_wind_speed |
notes |
extra |
SET
ROLE user_role current_setting
-[ RECORD 1 ]
vessel_id | t
logbook
-[ RECORD 1 ]
count | 4
logbook
-[ RECORD 1 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | patch log name 3
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E61000001A000000B0DEBBE0E68737404DA938FBF0094E4020D26F5F0786374030BB270F0B094E400C6E7ED60F843740AA60545227084E40D60FC48C03823740593CE27D42074E407B39D9F322803740984C158C4A064E4091ED7C3F357E3740898BB63D54054E40A8A1208B477C37404BA3DC9059044E404C5CB4EDA17A3740C4F856115B034E40A9A44E4013793740D8F0F44A59024E40E4839ECDAA773740211FF46C56014E405408D147067637408229F03B73004E40787AA52C43743740F90FE9B7AFFF4D40F8098D4D18723740C217265305FF4D4084E82303537037409A2D464AA0FE4D4022474DCE636F37402912396A72FE4D408351499D806E374088CFB02B40FE4D4076711B0DE06D3740B356C7040FFE4D404EAC66B0BC6E374058A835CD3BFE4D40D7A3703D0A6F3740D3E10EC15EFE4D4087602F277B6E3740A779C7293AFE4D402063EE5A426E3740B5A679C729FE4D40381DEE10EC6D37409ECA7C1A0AFE4D40E2C46A06CB6B37400A43F7BF36FD4D4075931804566E3740320BDAD125FD4D409A2D464AA06E37404A5658830AFD4D40029A081B9E6E37404A5658830AFD4D40
distance | 7.6447
duration | PT27M
avg_speed | 3.6357142857142852
max_speed | 6.1
max_wind_speed | 22.1
notes | new log note 3
extra | {"tags": ["tag_name"], "metrics": {"propulsion.main.runTime": "PT10S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}, "avg_wind_speed": 14.549999999999999}
-[ RECORD 2 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Norra hamnen to Ekenäs
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E610000013000000029A081B9E6E37404A5658830AFD4D404806A6C0EF6C3740DA1B7C6132FD4D40FE65F7E461693740226C787AA5FC4D407DD3E10EC1663740B29DEFA7C6FB4D40898BB63D5465374068479724BCFA4D409A5271F6E1633740B6847CD0B3F94D40431CEBE236623740E9263108ACF84D402C6519E2585F37407E678EBFC7F74D4096218E75715B374027C5B45C23F74D402AA913D044583740968DE1C46AF64D405AF5B9DA8A5537407BEF829B9FF54D407449C2ABD253374086C954C1A8F44D407D1A0AB278543740F2B0506B9AF34D409D11A5BDC15737406688635DDCF24D4061C3D32B655937402CAF6F3ADCF14D408988888888583740B3319C58CDF04D4021FAC8C0145837408C94405DB7EF4D40B8F9593F105B37403DC0804BEDEE4D40DE4C5FE2A25D3740AE47E17A14EE4D40
distance | 8.8968
duration | PT20M
avg_speed | 5.4523809523809526
max_speed | 6.5
max_wind_speed | 37.2
notes |
extra | {"metrics": {"propulsion.main.runTime": "PT11S"}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}, "avg_wind_speed": 10.476190476190478}
-[ RECORD 3 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Tropics Zone
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E610000002000000A4E85E0D58934FC000DC509B80052C40BC069B43D64553C090510727F3BD2940
distance | 123
duration |
avg_speed |
max_speed |
max_wind_speed |
notes |
extra |
-[ RECORD 4 ]-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
name | Alaska Zone
_from_time_not_null | t
_to_time_not_null | t
track_geom | 0102000020E610000002000000FDB11ED079F261C090C47F1861B84D40D3505124540B63C09C091C1C8D4A4C40
distance | 1234
duration |
avg_speed |
max_speed |
max_wind_speed |
notes |
extra |
Delete logbook for user kapla
-[ RECORD 1 ]-----+--
delete_logbook_fn | t
-[ RECORD 1 ]-----+--
delete_logbook_fn | t
Merge logbook for user kapla
-[ RECORD 1 ]----+-
merge_logbook_fn |

tests/sql/maplapse.sql Normal file
View File

@@ -0,0 +1,36 @@
---------------------------------------------------------------------------
-- Listing
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
-- output display format
\x on
-- Assign vessel_id var
SELECT v.vessel_id as "vessel_id_kapla" FROM auth.vessels v WHERE v.owner_email = 'demo+kapla@openplotter.cloud' \gset
SELECT v.vessel_id as "vessel_id_aava" FROM auth.vessels v WHERE v.owner_email = 'demo+aava@openplotter.cloud' \gset
-- user_role
SET ROLE user_role;
SELECT set_config('vessel.id', :'vessel_id_kapla', false) IS NOT NULL as vessel_id;
-- insert fake maplapse request
\echo 'Insert fake maplapse request'
SELECT api.maplapse_record_fn('Kapla,?start_log=1&end_log=1&height=100vh');
-- maplapse_role
SET ROLE maplapse_role;
\echo 'GET pending maplapse task'
SELECT id as maplapse_id from process_queue where channel = 'maplapse_video' and processed is null order by stored asc limit 1 \gset
SELECT count(id) from process_queue where channel = 'maplapse_video' and processed is null limit 1;
\echo 'Update process on completion'
UPDATE process_queue SET processed = NOW() WHERE id = :'maplapse_id';
\echo 'Insert video availability notification in process queue'
INSERT INTO process_queue ("channel", "payload", "ref_id", "stored") VALUES ('new_video', CONCAT('video_', :'vessel_id_kapla'::TEXT, '_1', '_1.mp4'), :'vessel_id_kapla'::TEXT, NOW());
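-- A hedged consumer sketch (assumption): a notification worker would dequeue
-- the video entry the same way the maplapse task was fetched above.
--SELECT id, payload FROM process_queue WHERE channel = 'new_video' AND processed IS NULL ORDER BY stored ASC LIMIT 1;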

View File

@@ -0,0 +1,24 @@
current_database
------------------
signalk
(1 row)
You are now connected to database "signalk" as user "username".
Expanded display is on.
SET
-[ RECORD 1 ]
vessel_id | t
Insert fake maplapse request
-[ RECORD 1 ]------+--
maplapse_record_fn | t
SET
GET pending maplapse task
-[ RECORD 1 ]
count | 1
Update process on completion
UPDATE 1
Insert video availability notification in process queue
INSERT 0 1

tests/sql/metadata.sql Normal file
View File

@@ -0,0 +1,84 @@
---------------------------------------------------------------------------
-- Listing
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
-- output display format
\x on
SELECT count(*) as count_eq_2 FROM api.metadata m;
SELECT v.vessel_id as "vessel_id" FROM auth.vessels v WHERE v.owner_email = 'demo+kapla@openplotter.cloud' \gset
--\echo :"vessel_id"
SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
-- user_role
SET ROLE user_role;
\echo 'api.metadata details'
SELECT vessel_id IS NOT NULL AS vessel_id_not_null, m.name, m.mmsi, m.length, m.beam, m.height, m.ship_type, m.plugin_version, m.signalk_version, m.time IS NOT NULL AS time, m.active, configuration, available_keys FROM api.metadata AS m ORDER BY m.name ASC;
\echo 'api.metadata get configuration'
select configuration from api.metadata; --WHERE vessel_id = current_setting('vessel.id', false);
\echo 'api.metadata update configuration'
UPDATE api.metadata SET configuration = '{ "depthKey": "environment.depth.belowTransducer" }'; --WHERE vessel_id = current_setting('vessel.id', false);
\echo 'api.metadata get configuration with new value'
select configuration->'depthKey' AS depthKey, configuration->'update_at' IS NOT NULL AS update_at_not_null from api.metadata; --WHERE vessel_id = current_setting('vessel.id', false);
\echo 'api.metadata get configuration based on update_at value'
select configuration->'depthKey' AS depthKey, configuration->'update_at' IS NOT NULL AS update_at_not_null from api.metadata WHERE vessel_id = current_setting('vessel.id', false) AND configuration->>'update_at' <= to_char(NOW(), 'YYYY-MM-DD"T"HH24:MI:SS"Z"');
-- Upsert make_model on metadata_ext table
\echo 'api.metadata_ext set make_model'
INSERT INTO api.metadata_ext (vessel_id, make_model)
VALUES (current_setting('vessel.id', false), 'my super yacht')
ON CONFLICT (vessel_id) DO UPDATE
SET make_model = EXCLUDED.make_model;
-- Upsert polar on metadata_ext table
\echo 'api.metadata_ext set polar'
INSERT INTO api.metadata_ext (vessel_id, polar)
VALUES (current_setting('vessel.id', false), 'twa/tws;4;6;8;10;12;14;16;20;24\n0;0;0;0;0;0;0;0;0;0')
ON CONFLICT (vessel_id) DO UPDATE
SET polar = EXCLUDED.polar;
-- Upsert image on metadata_ext table
\echo 'api.metadata_ext set image/image_b64'
INSERT INTO api.metadata_ext (vessel_id, image_b64)
VALUES (current_setting('vessel.id', false), 'iVBORw0KGgoAAAANSUhEUgAAAMgAAAAyCAIAAACWMwO2AAABNklEQVR4nO3bwY6CMBiF0XYy7//KzIKk6VBjiMMNk59zVljRIH6WsrBv29bgal93HwA1CYsIYREhLCKERYSwiBAWEcIiQlhECIsIYREhLCKERYSwiBAWEcIiQlhECIsIYREhLCK+7z6A/6j33lq75G8m')
ON CONFLICT (vessel_id) DO UPDATE
SET image_b64 = EXCLUDED.image_b64;
-- Ensure make_model on metadata_ext table is updated
\echo 'api.metadata_ext get make_model'
SELECT make_model FROM api.metadata_ext; --WHERE vessel_id = current_setting('vessel.id', false);
-- Ensure polar_updated_at on metadata_ext table is updated by trigger
\echo 'api.metadata_ext get polar_updated_at'
SELECT polar,polar_updated_at IS NOT NULL AS polar_updated_at_not_null FROM api.metadata_ext; --WHERE vessel_id = current_setting('vessel.id', false);
-- Ensure image_updated_at on metadata_ext table is updated by trigger
\echo 'api.metadata_ext get image_updated_at'
SELECT image_b64 IS NULL AS image_b64_is_null,image IS NOT NULL AS image_not_null,image_updated_at IS NOT NULL AS image_updated_at_not_null FROM api.metadata_ext; --WHERE vessel_id = current_setting('vessel.id', false);
-- vessel_role
SET ROLE vessel_role;
\echo 'api.metadata get configuration with new value as vessel'
select configuration->'depthKey' AS depthKey, configuration->'update_at' IS NOT NULL AS update_at_not_null from api.metadata; -- WHERE vessel_id = current_setting('vessel.id', false);
\echo 'api.metadata get configuration based on update_at value as vessel'
select configuration->'depthKey' AS depthKey, configuration->'update_at' IS NOT NULL AS update_at_not_null from api.metadata WHERE vessel_id = current_setting('vessel.id', false) AND configuration->>'update_at' <= to_char(NOW(), 'YYYY-MM-DD"T"HH24:MI:SS"Z"');
-- api_anonymous
SET ROLE api_anonymous;
\echo 'api_anonymous get vessel image'
SELECT api.vessel_image(current_setting('vessel.id', false)) IS NOT NULL AS vessel_image_not_null;
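
The polar_updated_at and image_updated_at assertions above rely on row-level triggers stamping the timestamp whenever the corresponding column changes (the polar and image upserts hit the ON CONFLICT ... DO UPDATE path, so an UPDATE trigger fires). A minimal sketch of such a trigger, with hypothetical names since the shipped migration may define it differently:
-- Hypothetical trigger function; the real one in the migration may differ.
CREATE OR REPLACE FUNCTION touch_polar_updated_at_fn() RETURNS trigger AS $$
BEGIN
    IF NEW.polar IS DISTINCT FROM OLD.polar THEN
        NEW.polar_updated_at := NOW();
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER touch_polar_updated_at
    BEFORE UPDATE ON api.metadata_ext
    FOR EACH ROW EXECUTE FUNCTION touch_polar_updated_at_fn();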


@@ -0,0 +1,83 @@
current_database
------------------
signalk
(1 row)
You are now connected to database "signalk" as user "username".
Expanded display is on.
-[ RECORD 1 ]-
count_eq_2 | 2
-[ RECORD 1 ]
vessel_id | t
SET
api.metadata details
-[ RECORD 1 ]------+----------------
vessel_id_not_null | t
name | kapla
mmsi | 123456789
length | 12
beam | 10
height | 24
ship_type | 36
plugin_version | 0.0.1
signalk_version | signalk_version
time | t
active | t
configuration |
available_keys |
api.metadata get configuration
-[ RECORD 1 ]-+-
configuration |
api.metadata update configuration
UPDATE 1
api.metadata get configuration with new value
-[ RECORD 1 ]------+------------------------------------
depthkey | "environment.depth.belowTransducer"
update_at_not_null | t
api.metadata get configuration based on update_at value
-[ RECORD 1 ]------+------------------------------------
depthkey | "environment.depth.belowTransducer"
update_at_not_null | t
api.metadata_ext set make_model
INSERT 0 1
api.metadata_ext set polar
INSERT 0 1
api.metadata_ext set image/image_b64
INSERT 0 1
api.metadata_ext get make_model
-[ RECORD 1 ]--------------
make_model | my super yacht
api.metadata_ext get polar_updated_at
-[ RECORD 1 ]-------------+-----------------------------------------------------
polar | twa/tws;4;6;8;10;12;14;16;20;24\n0;0;0;0;0;0;0;0;0;0
polar_updated_at_not_null | t
api.metadata_ext get image_updated_at
-[ RECORD 1 ]-------------+--
image_b64_is_null | f
image_not_null | t
image_updated_at_not_null | t
SET
api.metadata get configuration with new value as vessel
-[ RECORD 1 ]------+------------------------------------
depthkey | "environment.depth.belowTransducer"
update_at_not_null | t
api.metadata get configuration based on update_at value as vessel
-[ RECORD 1 ]------+------------------------------------
depthkey | "environment.depth.belowTransducer"
update_at_not_null | t
SET
api_anonymous get vessel image
-[ RECORD 1 ]---------+--
vessel_image_not_null | t

96
tests/sql/mobilitydb.sql Normal file
View File

@@ -0,0 +1,96 @@
---------------------------------------------------------------------------
-- Listing
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
-- output display format
\x on
-- Assign vessel_id var
SELECT v.vessel_id as "vessel_id_kapla" FROM auth.vessels v WHERE v.owner_email = 'demo+kapla@openplotter.cloud' \gset
SELECT v.vessel_id as "vessel_id_aava" FROM auth.vessels v WHERE v.owner_email = 'demo+aava@openplotter.cloud' \gset
-- user_role
SET ROLE user_role;
-- Switch vessel context to aava
SELECT set_config('vessel.id', :'vessel_id_aava', false) IS NOT NULL as vessel_id;
-- Update notes
\echo 'Add a note for an entry from a trip'
-- Get original value, should be empty
SELECT numInstants(trip), valueAtTimestamp(trip_notes,timestampN(trip,13)) from api.logbook where id = 3;
-- Create the string
SELECT concat('["fishing"@', timestampN(trip,13),',""@',timestampN(trip,14),']') as to_be_update FROM api.logbook where id = 3 \gset
--\echo :to_be_update
-- Update the notes
SELECT api.update_trip_notes_fn(3, :'to_be_update');
-- Compare with previous value, should include "fishing"
SELECT valueAtTimestamp(trip_notes,timestampN(trip,13)) from api.logbook where id = 3;
-- Delete notes
\echo 'Delete an entry from a trip'
-- Get original value, should be 45
SELECT numInstants(trip), jsonb_array_length(api.export_logbook_geojson_point_trip_fn(id)->'features') from api.logbook where id = 3;
-- Extract the timestamps of the invalid coords
--SELECT timestampN(trip,14) as "to_be_delete" FROM api.logbook where id = 3 \gset
SELECT concat('[', timestampN(trip,14),',',timestampN(trip,15),')') as to_be_delete FROM api.logbook where id = 3 \gset
--\echo :to_be_delete
-- Delete the entry across all trip sequences
SELECT api.delete_trip_entry_fn(3, :'to_be_delete');
-- Compare with previous value, should be 44
SELECT numInstants(trip), jsonb_array_length(api.export_logbook_geojson_point_trip_fn(id)->'features') from api.logbook where id = 3;
-- Export PostGIS geography from a trip
\echo 'Export PostGIS geography from trajectory'
--SELECT ST_IsValid(trajectory(trip)::geometry) IS TRUE FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
SELECT trajectory(trip)::geometry FROM api.logbook WHERE id = 3;
-- Export GeoJSON from a trip
\echo 'Export GeoJSON with properties from a trip'
SELECT jsonb_array_length(api.export_logbook_geojson_point_trip_fn(3)->'features');
-- Export GPX from a trip
\echo 'Export GPX from a trip'
SELECT api.export_logbook_gpx_trip_fn(3) IS NOT NULL;
-- Export KML from a trip
\echo 'Export KML from a trip'
SELECT api.export_logbook_kml_trip_fn(3) IS NOT NULL;
-- Switch vessel context to kapla
SELECT set_config('vessel.id', :'vessel_id_kapla', false) IS NOT NULL as vessel_id;
-- Export timelapse as Geometry LineString from a trip
\echo 'Export timelapse as Geometry LineString from a trip'
--SELECT api.export_logbooks_geojson_linestring_trips_fn(1,2) FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
-- Test geometry_type and num_properties
-- properties include endtimestamp and starttimestamp
WITH geojson_output AS (
SELECT api.export_logbooks_geojson_linestring_trips_fn(1, 2) AS geojson
FROM api.logbook
WHERE vessel_id = current_setting('vessel.id', false)
)
SELECT
--geojson
geojson->'features'->0->'geometry'->>'type' AS geometry_type,
--jsonb_array_length(jsonb_object_keys(geojson->'features'->0->'properties')::JSONB),
--jsonb_array_length(jsonb_object_keys(geojson->'features')) AS num_geometry,
(SELECT COUNT(*) FROM jsonb_object_keys(geojson->'features'->0->'properties')) AS num_properties
FROM geojson_output;
-- Export timelapse as Geometry Point from a trip
\echo 'Export timelapse as Geometry Point from a trip'
SELECT api.export_logbooks_geojson_point_trips_fn(1,2) IS NOT NULL FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
-- Export GPX from trips
\echo 'Export GPX from trips'
SELECT api.export_logbooks_gpx_trips_fn(1,2) IS NOT NULL FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
-- Export KML from trips
\echo 'Export KML from trips'
SELECT api.export_logbooks_kml_trips_fn(1,2) IS NOT NULL FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
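
The trip and trip_notes columns are MobilityDB temporal types, which is why the note update builds a temporal text literal from the trip's own instants: value@timestamp pairs inside a [...] sequence. A standalone illustration of the same idea, assuming the MobilityDB extension is installed:
-- A ttext sequence pairs values with timestamps; valueAtTimestamp() reads one back.
SELECT valueAtTimestamp(
    ttext '["fishing"@2024-01-01 10:00:00+00, ""@2024-01-01 10:05:00+00]',
    '2024-01-01 10:00:00+00'::timestamptz);
-- expected: fishing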


@@ -0,0 +1,70 @@
current_database
------------------
signalk
(1 row)
You are now connected to database "signalk" as user "username".
Expanded display is on.
SET
-[ RECORD 1 ]
vessel_id | t
Add a note for an entry from a trip
-[ RECORD 1 ]----+---
numinstants | 45
valueattimestamp |
-[ RECORD 1 ]--------+-
update_trip_notes_fn |
-[ RECORD 1 ]----+--------
valueattimestamp | fishing
Delete an entry from a trip
-[ RECORD 1 ]------+---
numinstants | 45
jsonb_array_length | 45
-[ RECORD 1 ]--------+-
delete_trip_entry_fn |
-[ RECORD 1 ]------+---
numinstants | 44
jsonb_array_length | 44
Export PostGIS geography from trajectory
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
trajectory | 0102000020E61000002B0000001CACA4BA25BC3840217B18B556DC4D40569FABADD8BB384012EA33B10ADC4D408A65E9F989BB384018A023A8D0DB4D40B174F4AE30BB38409951E2299ADB4D407C348B06DFBA3840443A973D64DB4D40FE4465C39ABA38409313927131DB4D402C9CA4F963BA384029E6C52EF6DA4D4010559D7A49BA384019E25817B7DA4D401BF0F96184BC3840F868160DBEDC4D4095B7239C16BC38401DDB7C6D47DC4D40559FABADD8BB384012EA33B10ADC4D408A65E9F989BB384018A023A8D0DB4D40B274F4AE30BB38409951E2299ADB4D407C348B06DFBA3840443A973D64DB4D40FE4465C39ABA38409313927131DB4D402C9CA4F963BA384029E6C52EF6DA4D4010559D7A49BA384019E25817B7DA4D401BF0F96184BC3840F868160DBEDC4D4098B1B2C755BC38404D52F41B81DC4D4095B7239C16BC38401DDB7C6D47DC4D407448C55AD7BB38406CABFEAD09DC4D40D8367B5688BB38400D54C6BFCFDB4D40B274F4AE30BB38409951E2299ADB4D401256BEC2DDBA3840EB22E06B63DB4D40FE4465C39ABA38409313927131DB4D402C9CA4F963BA384029E6C52EF6DA4D40407D152A49BA3840CDA66D0DB6DA4D40BDBFE6C182BC3840F9269710BDDC4D4098B1B2C755BC38404D52F41B81DC4D4024308CAA15BC3840B85DC36746DC4D407448C55AD7BB38406CABFEAD09DC4D40D8367B5688BB38400D54C6BFCFDB4D408224A24E2FBB384070BEC74F99DB4D4028B0A5EC99BA3840F90C4D7E30DB4D402C4080B163BA38402FA93528F5DA4D40407D152A49BA3840CDA66D0DB6DA4D40BDBFE6C182BC3840F9269710BDDC4D4099B1B2C755BC38404D52F41B81DC4D4024308CAA15BC3840B85DC36746DC4D407448C55AD7BB38406CABFEAD09DC4D40D8367B5688BB38400D54C6BFCFDB4D408224A24E2FBB384070BEC74F99DB4D401156BEC2DDBA3840EB22E06B63DB4D40
Export GeoJSON with properties from a trip
-[ RECORD 1 ]------+---
jsonb_array_length | 44
Export GPX from a trip
-[ RECORD 1 ]
?column? | t
Export KML from a trip
-[ RECORD 1 ]
?column? | t
-[ RECORD 1 ]
vessel_id | t
Export timelapse as Geometry LineString from a trip
-[ RECORD 1 ]--+-----------
geometry_type | LineString
num_properties | 35
Export timelapse as Geometry Point from a trip
-[ RECORD 1 ]
?column? | t
Export GPX from trips
-[ RECORD 1 ]
?column? | t
Export KML from trips
-[ RECORD 1 ]
?column? | t


@@ -51,3 +51,12 @@ select count(*) from api.monitoring_temperatures;
-- Test monitoring for user
--select * from api.monitoring_humidity;
select count(*) from api.monitoring_humidity;
\echo 'Test metersToKnots'
select public.metersToKnots(1);
\echo 'Test radiantToDegrees'
select public.radiantToDegrees(1);
\echo 'Test valToPercent'
select public.valToPercent(1);
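
The three helpers are plain unit conversions, and the expected output below pins their rounding: 1 m/s ≈ 1.94 knots (×3600/1852), 1 rad ≈ 57° (×180/π), and valToPercent scales a ratio by 100. Hypothetical pure-SQL equivalents matching those expected values (the shipped functions may round differently):
-- Sketches only; names mirror the helpers under test.
CREATE OR REPLACE FUNCTION public.metersToKnots(v NUMERIC) RETURNS NUMERIC AS $$
    SELECT ROUND(v * 3600 / 1852, 2); -- m/s to knots
$$ LANGUAGE sql IMMUTABLE;
CREATE OR REPLACE FUNCTION public.radiantToDegrees(v NUMERIC) RETURNS NUMERIC AS $$
    SELECT ROUND(v * 180 / pi()::NUMERIC); -- radians to degrees
$$ LANGUAGE sql IMMUTABLE;
CREATE OR REPLACE FUNCTION public.valToPercent(v NUMERIC) RETURNS NUMERIC AS $$
    SELECT v * 100; -- ratio to percent
$$ LANGUAGE sql IMMUTABLE;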


@@ -22,17 +22,29 @@ count | 21
Test monitoring_view3 for user
-[ RECORD 1 ]
count | 3736
count | 3775
Test monitoring_voltage for user
-[ RECORD 1 ]
count | 47
count | 48
Test monitoring_temperatures for user
-[ RECORD 1 ]
count | 120
count | 121
Test monitoring_humidity for user
-[ RECORD 1 ]
count | 0
Test metersToKnots
-[ RECORD 1 ]-+-----
meterstoknots | 1.94
Test radiantToDegrees
-[ RECORD 1 ]----+---
radianttodegrees | 57
Test valToPercent
-[ RECORD 1 ]+----
valtopercent | 100

53
tests/sql/qgis.sql Normal file

@@ -0,0 +1,53 @@
---------------------------------------------------------------------------
-- Listing
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
-- output display format
\x on
-- Assign vessel_id var
SELECT v.vessel_id as "vessel_id_kapla" FROM auth.vessels v WHERE v.owner_email = 'demo+kapla@openplotter.cloud' \gset
SELECT v.vessel_id as "vessel_id_aava" FROM auth.vessels v WHERE v.owner_email = 'demo+aava@openplotter.cloud' \gset
-- qgis
SET ROLE qgis_role;
-- Get BBOX Extent from SQL query for a log:
-- "^/log_(\w+)_(\d+).png$"
-- "^/log_(\w+)_(\d+)_sat.png$
-- requires a log_id; image width, height, and scale_out are optional
\echo 'Get BBOX Extent from SQL query for a log: "^/log_(\w+)_(\d+).png$"'
SELECT public.qgis_bbox_py_fn(null, 1);
SELECT public.qgis_bbox_py_fn(null, 3);
-- "^/log_(\w+)_(\d+)_line.png$"
\echo 'Get BBOX Extent from SQL query for a log as line: "^/log_(\w+)_(\d+)_line.png$"'
SELECT public.qgis_bbox_py_fn(null, 1, 333, 216, False);
SELECT public.qgis_bbox_py_fn(null, 3, 333, 216, False);
-- Get BBOX Extent from SQL query for all logs by vessel_id
-- "^/logs_(\w+)_(\d+).png$"
-- requires a vessel_id; image width, height, and scale_out are optional
\echo 'Get BBOX Extent from SQL query for all logs by vessel_id: "^/logs_(\w+)_(\d+).png$"'
SELECT public.qgis_bbox_py_fn(:'vessel_id_kapla'::TEXT);
SELECT public.qgis_bbox_py_fn(:'vessel_id_aava'::TEXT);
-- Get BBOX Extent from SQL query for a trip by vessel_id
-- "^/trip_(\w+)_(\d+)_(\d+).png$"
-- requires a vessel_id and start/end log ids; image width, height, and scale_out are optional
\echo 'Get BBOX Extent from SQL query for a trip by vessel_id: "^/trip_(\w+)_(\d+)_(\d+).png$"'
SELECT public.qgis_bbox_py_fn(:'vessel_id_kapla'::TEXT, 1, 2);
SELECT public.qgis_bbox_py_fn(:'vessel_id_aava'::TEXT, 3, 4);
-- requires the vessel_id and start/end log ids concatenated into one argument, as captured by the Apache rewrite rule
\echo 'Get BBOX Extent from SQL query for a trip by vessel_id: "^/trip_((\w+)_(\d+)_(\d+)).png$"'
SELECT public.qgis_bbox_trip_py_fn(CONCAT(:'vessel_id_kapla'::TEXT, '_', 1, '_',2));
SELECT public.qgis_bbox_trip_py_fn(CONCAT(:'vessel_id_aava'::TEXT, '_', 3, '_', 4));
--SELECT set_config('vessel.id', :'vessel_id_kapla', false) IS NOT NULL as vessel_id;
-- SQL request from QGIS to fetch the necessary data based on vessel_id
--SELECT id, vessel_id, name as logname, ST_Transform(track_geom, 3857) as track_geom, ROUND(distance, 2) as distance, ROUND(EXTRACT(epoch FROM duration)/3600,2) as duration,_from_time,_to_time FROM api.logbook where track_geom is not null and _to_time is not null ORDER BY _from_time DESC;
--SELECT count(*) FROM api.logbook WHERE track_geom IS NOT NULL AND _to_time IS NOT NULL;
SELECT count(*) FROM api.logbook WHERE trip IS NOT NULL AND _to_time IS NOT NULL;
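
qgis_bbox_py_fn and qgis_bbox_trip_py_fn return a map extent for the requested log, trip, or vessel, sized against the requested image dimensions. As a rough plain-PostGIS approximation (an assumption, not the actual Python implementation), the extent of one log's track padded by 10% could be computed like this:
-- Hypothetical bbox computation; the real function also handles aspect ratio and scale_out.
WITH g AS (
    SELECT trajectory(trip)::geometry AS geom FROM api.logbook WHERE id = 1
)
SELECT ST_Expand(
    ST_Envelope(geom),
    0.1 * GREATEST(ST_XMax(geom) - ST_XMin(geom),
                   ST_YMax(geom) - ST_YMin(geom)))
FROM g;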

Some files were not shown because too many files have changed in this diff.