99 Commits

Author SHA1 Message Date
xbgmsharp
9532075bc4 Update frontend to release 0.0.7 2023-10-25 22:18:56 +02:00
xbgmsharp
5996b4d483 Upgrade to the latest version and prepare for next release 2023-10-25 12:40:49 +02:00
xbgmsharp
fdd6fc18e1 Update logbook tests, no more track_gpx field 2023-10-25 12:08:11 +02:00
xbgmsharp
af3866fafe Add new test for export logbooks endpoint 2023-10-25 12:07:07 +02:00
xbgmsharp
53daaa9947 Update vessel_fn to handle empty vessel name from signalk in api.metadata 2023-10-25 09:28:10 +02:00
xbgmsharp
3fed9e0b6a Update public.logbook_update_geojson_fn, expose more data
Update public.logbook_metrics_dwithin_fn, expand large zone for invalid logbook, increase to 15 metres.
Disable public.logbook_update_gpx_fn, back to using api.export_logbook_gpx_fn.
Add public.get_app_url_fn, allow limited access to the app URL settings.
2023-10-25 09:27:48 +02:00
xbgmsharp
f3168542fd Update api.timelapse_fn, use track_geom as geometry instead of track_geojson as geojson. x10 faster.
Update api.export_logbook_geojson_fn, output as JSONB as per best practice.
Update api.export_logbook_gpx_fn, dynamic from track_geojson on 'geometry'->>'type' = 'Point'.
Add api.export_logbooks_gpx_fn, export multiple logs in a GPX format
Add api.export_logbooks_kml_fn, export multiple logs in a KML format
2023-10-25 09:27:03 +02:00
xbgmsharp
d266485fef Update table api.logbook, remove track_gpx column, update track_geojson to jsonb type 2023-10-25 09:25:51 +02:00
xbgmsharp
8738becd82 Update OpenAPI documentation 2023-10-25 09:24:27 +02:00
xbgmsharp
ad43ca6629 Update auth.accounts with comments 2023-10-23 21:39:53 +02:00
xbgmsharp
9368878963 Update .gitignore 2023-10-22 20:46:20 +02:00
xbgmsharp
496491a43a Update README 2023-10-22 20:46:08 +02:00
xbgmsharp
7494b39abc Update api views with standard name-casing 2023-10-22 19:50:13 +02:00
xbgmsharp
74426a75f8 Update main tests, cleanup 2023-10-22 19:44:34 +02:00
xbgmsharp
9bac88a8cc Update main tests, add openapi.json update 2023-10-22 19:43:46 +02:00
xbgmsharp
c0af53155c Update tests to match new public_id column for auth.accounts 2023-10-22 19:43:06 +02:00
xbgmsharp
e0aa6a4d0e Update README 2023-10-22 19:39:39 +02:00
xbgmsharp
2425b674f7 Update auth.accounts, remove unused userid, add public_id integer for anonymous and grafana orgId references 2023-10-22 19:34:41 +02:00
xbgmsharp
b7a1462ec6 Update permissions 2023-10-22 19:28:03 +02:00
xbgmsharp
a31d857a6e Update open API documentation 2023-10-22 19:25:58 +02:00
xbgmsharp
dbeb64c0dc Add schemalint github action for best practice reference 2023-10-22 19:24:45 +02:00
xbgmsharp
229c219751 Update metrics_trigger_fn, Ignore metric entry if latitude and longitude are equal.
Update api.logbook and api.stays tables, update type to text instead of varchar
2023-10-22 18:43:54 +02:00
xbgmsharp
3216ffe42c Update cron_process_no_activity_fn, check for vessel with no activity for more than 200 days then send notification
Add cron_process_deactivated_fn, check for inactivity for more than 1 year to send notification and delete data
2023-10-18 23:25:21 +02:00
xbgmsharp
e2e3e5814e Update api.vessel_fn, expose data from Signal K rather than from user input
Update api.settings_fn, expose accounts.public_id in settings
Update api.eventlogs_view, cleanup formatting
Add api.ispublic_fn, check if a page is publicly accessible from user preferences
2023-10-18 23:21:22 +02:00
xbgmsharp
5f709eb71e Update mocha http tests, add export_logbook_kml_fn basic test 2023-10-18 11:45:39 +02:00
xbgmsharp
d5bf36a85c Update message template for notifications 2023-10-18 11:44:54 +02:00
xbgmsharp
90d48c0c52 Update cron job, fix job details cleanup 2023-10-16 21:46:09 +02:00
xbgmsharp
62707aa86c Update export_logbook_gpx_fn, update comment 2023-10-16 16:31:42 +02:00
xbgmsharp
ac187a1480 Update KML export, fix LineString XML export 2023-10-16 12:21:25 +02:00
xbgmsharp
7b0bf7494f Update get_user_settings_from_vesselid, optimize query using the citext extension.
Update public.check_jwt, fix typo.
2023-10-16 11:53:44 +02:00
xbgmsharp
c64219e249 Fix init export_gpx2 2023-10-16 00:55:33 +02:00
xbgmsharp
2127dd7fcb Update github actions test, increase delay to allow the db to be loaded from initdb 2023-10-16 00:48:42 +02:00
xbgmsharp
2a583b94dc Update api.export_logbook_kml_fn, allow export in KML content-type 2023-10-16 00:37:16 +02:00
xbgmsharp
147d9946c3 Update connections limit to 20 for anonymous 2023-10-16 00:29:16 +02:00
xbgmsharp
993cfaeaff Update cron jobs 2023-10-13 15:59:21 +02:00
xbgmsharp
3e70283221 Update public.logbook_update_geojson_fn formatting 2023-10-13 15:58:40 +02:00
xbgmsharp
0697acb940 Update message template 2023-10-13 15:40:15 +02:00
xbgmsharp
8ca4d03649 Update permissions for user_role and grafana 2023-10-13 15:39:18 +02:00
xbgmsharp
7a465ff532 Update grafana home dashboard with geomap 2023-10-13 14:58:41 +02:00
xbgmsharp
96dce86678 Update api.export_logbook_kml_fn but still not working in REST 2023-10-11 17:33:55 +02:00
xbgmsharp
8dd827f70d Add new SQL tests for update_logbook_observations_fn 2023-10-11 17:31:43 +02:00
xbgmsharp
572f0cd19d Fix job_run_details_cleanup_fn 2023-10-11 17:20:08 +02:00
xbgmsharp
047f243758 Fix api.update_logbook_observations_fn. 2023-10-11 17:19:10 +02:00
xbgmsharp
5c494896c6 Increase default number of connections for grafana_auth 2023-10-11 17:18:41 +02:00
xbgmsharp
b7e717afbc Update notification jobs, fix sql query for no_vessel,no_metadata,no_activity 2023-10-10 13:51:51 +02:00
xbgmsharp
2f3912582a Update api.export_logbook_kml_fn fix export 2023-10-09 20:38:24 +02:00
xbgmsharp
f7b9a54a71 Update KML export with basic LineString support 2023-10-09 20:30:27 +02:00
xbgmsharp
4e554083b0 Add Row level security to vessel_view. Clean up code, using pg15 RLS can be applied to views 2023-10-09 16:27:19 +02:00
xbgmsharp
69b6490534 Update api.vessels_view LIMIT to last metrics.
Add row-level security measures to view
2023-10-09 16:26:02 +02:00
xbgmsharp
8b336f6f9b Add explicit schema when public 2023-10-09 16:22:30 +02:00
xbgmsharp
ef5868d412 Update public.logbook_update_gpx_fn to be displayed in order
Update public.logbook_update_geojson_fn formatting
2023-10-09 16:13:56 +02:00
xbgmsharp
ce532bbb4d Fix typo KLM → KML in file extensions 2023-10-09 16:13:09 +02:00
xbgmsharp
66999ca9bb Update api.timelapse_fn, order logs by id to be displayed in order
Add draft support for KML export
2023-10-09 16:10:25 +02:00
xbgmsharp
65d0a6fe4b Update frontend to latest dev 2023-10-06 00:22:32 +02:00
xbgmsharp
f7724db62a Update home dashboard 2023-10-06 00:05:37 +02:00
xbgmsharp
01c20651a4 update grafana 2023-10-05 23:32:24 +02:00
xbgmsharp
57d38ba893 Update reverse geocode, fix error on invalid geocode 2023-10-05 00:42:07 +02:00
xbgmsharp
b817a837d0 Release 0.3.0 2023-10-04 16:59:49 +02:00
xbgmsharp
e1fccabba5 Revert notification, send reminders every Sunday 2023-10-04 16:59:20 +02:00
xbgmsharp
b386e307f9 Update tests output, release 0.3.0 and latest version of PostgREST 2023-10-04 16:54:44 +02:00
xbgmsharp
53b25e1656 Add public.delete_vessel_fn, delete all data received from a vessel 2023-10-04 16:40:45 +02:00
xbgmsharp
9c7301deac Update login fn to return 401 Unauthorized vs 403 Forbidden 2023-10-04 16:39:40 +02:00
xbgmsharp
0f08667d3f Update Notifications/Reminders for no vessel & no metadata & no activity to once a month 'At 08:01 on day-of-month 6 and on Sunday.' 2023-10-03 22:30:38 +02:00
xbgmsharp
baea4031b8 Update tests to match language check 2023-10-02 21:41:02 +02:00
xbgmsharp
3dcae9199f Update reverse code, enforce english language result 2023-10-02 21:40:44 +02:00
xbgmsharp
e8259d231e Update tests, pg_language change 2023-10-01 22:55:13 +02:00
xbgmsharp
dd81d49895 Update ERD api and public schema change 2023-10-01 22:51:56 +02:00
xbgmsharp
b861e4151c Update open API 2023-10-01 22:21:01 +02:00
xbgmsharp
42cfa34de8 Update tests, update timescale version 2023-10-01 22:13:54 +02:00
xbgmsharp
fa48d23b1a Add new fn for new cron schedule jobs no_vessel,no_metadata,no_activity. Update logging, fix typo 2023-10-01 22:12:15 +02:00
xbgmsharp
a28ea4631b Add new weekly cron notification for no_vessel,no_metadata,no_activity 2023-10-01 22:11:10 +02:00
xbgmsharp
1793dba64f Add new email templates, for no vessel created, no vessel connected, no recent vessel data. 2023-10-01 22:09:50 +02:00
xbgmsharp
b8c70f43b9 Add new helper fn, isdouble 2023-10-01 13:47:45 +02:00
xbgmsharp
be5c3e9a6f Update api.metrics, remove CONSTRAINT on lat and lon to silently ignore invalid values 2023-10-01 13:46:59 +02:00
xbgmsharp
427d30681e Update vessels views and fn, add plugin version, offline status and duration 2023-09-29 22:42:43 +02:00
xbgmsharp
3130394ab0 Update api.export_moorages_geojson_fn, add stay code 2023-09-29 22:40:52 +02:00
xbgmsharp
4e1e890ef7 Fix reset password, ambiguous column 2023-09-24 15:39:28 +02:00
xbgmsharp
f46787ca72 Update API documentation 2023-09-22 12:23:21 +02:00
xbgmsharp
6bb3fd7243 Update tests to match github actions results 2023-09-22 12:12:03 +02:00
xbgmsharp
27ab0d590f Update tests to match github actions results 2023-09-22 12:05:34 +02:00
xbgmsharp
e295380bcf Update stay_at table, fix typo in description 2023-09-22 12:05:04 +02:00
xbgmsharp
f9cebf1bda Update tests results 2023-09-22 11:08:17 +02:00
xbgmsharp
51bfc3ca9a Update github actions, Revert previous, extra second in duration! 2023-09-22 10:52:18 +02:00
xbgmsharp
7d3667726b Update tests result to match github actions, interval has a 1s extra!?! 2023-09-21 23:32:39 +02:00
xbgmsharp
5ec987e6bc Update test to match github actions geo reverse result 2023-09-21 23:26:06 +02:00
xbgmsharp
cbef039a26 Update tests results, new interval output style iso, new reverse_geo_py output with jsonb 2023-09-21 23:18:51 +02:00
xbgmsharp
23780e2c01 Update logbook,stays,moorage process functions to match reverse_geocode_py_fn jsonb output 2023-09-21 23:17:25 +02:00
xbgmsharp
a1306f06e2 Update reverse_geocode_py_fn, output jsonb to add country_code field 2023-09-21 23:16:42 +02:00
xbgmsharp
ed90fdd01d Add debug in reverse_geocode_py, github action return different result 2023-09-20 16:56:44 +02:00
xbgmsharp
23bce1ad26 Update test result 2023-09-20 16:56:09 +02:00
xbgmsharp
093992443b Update tests results 2023-09-20 00:22:10 +02:00
xbgmsharp
99dea0dbc8 Add default database date and interval style, set interval style to iso_8601 format 2023-09-19 23:29:36 +02:00
xbgmsharp
7edd2be1fd Update api_fn, Add api.stats_stays_fn, Update api.stats_logs_fn, Add logs_by_day_fn 2023-09-19 23:29:15 +02:00
xbgmsharp
e8a899f36c Update metrics_trigger_fn, Add validation check for speedOverGround.
Ignore if speedOverGround is over 40.
2023-09-19 23:29:00 +02:00
xbgmsharp
35940917e0 Update api.moorages_view and api.moorage_view, add stay code and stay description in web view 2023-09-14 09:52:19 +02:00
xbgmsharp
ecb6e666d2 Update api.moorages_view 2023-09-13 21:58:51 +02:00
xbgmsharp
7b11de9d0d Add support for logbook observations jsonb 2023-09-13 21:57:38 +02:00
xbgmsharp
788b6f160b Update Grafana role with monitoring views 2023-09-13 21:56:26 +02:00
xbgmsharp
cad4d38595 Update README 2023-08-26 13:56:47 +02:00
39 changed files with 2688 additions and 2765 deletions

.github/workflows/db-lint.yml (new file, 55 lines)

@@ -0,0 +1,55 @@
name: Linting rules on database schema.
on:
pull_request:
paths:
- 'initdb/**'
branches:
- 'main'
push:
branches:
- 'main'
paths:
- 'initdb/**'
tags:
- "*"
workflow_dispatch:
jobs:
schemalint:
name: schemalint
runs-on: ubuntu-latest
steps:
- name: Check out the source
uses: actions/checkout@v3
- name: Set env
run: cp .env.example .env
- name: Pull Docker images
run: docker-compose pull db api
- name: Run PostgSail Database & schemalint
# Environment variables
env:
# The hostname used to communicate with the PostgreSQL service container
PGHOST: localhost
PGPORT: 5432
PGDATABASE: signalk
PGUSER: username
PGPASSWORD: password
run: |
set -eu
source .env
docker-compose stop || true
docker-compose rm || true
docker-compose up -d db && sleep 30 && docker-compose up -d api && sleep 5
docker-compose ps -a
echo ${PGSAIL_API_URL}
curl ${PGSAIL_API_URL}
npm i -D schemalint
npx schemalint
- name: Show the logs
if: always()
run: |
docker-compose logs


@@ -51,7 +51,7 @@ jobs:
 source .env
 docker-compose stop || true
 docker-compose rm || true
-docker-compose up -d db && sleep 15 && docker-compose up -d api && sleep 5
+docker-compose up -d db && sleep 30 && docker-compose up -d api && sleep 5
 docker-compose ps -a
 echo ${PGSAIL_API_URL}
 curl ${PGSAIL_API_URL}


@@ -47,7 +47,7 @@ jobs:
 source .env
 docker-compose stop || true
 docker-compose rm || true
-docker-compose up -d db && sleep 15 && docker-compose up -d api && sleep 5
+docker-compose up -d db && sleep 30 && docker-compose up -d api && sleep 5
 docker-compose ps -a
 echo "Test PostgSail Web Unit Test"
 docker compose -f docker-compose.dev.yml -f docker-compose.yml up -d web_dev && sleep 100


@@ -42,7 +42,7 @@ jobs:
 source .env
 docker-compose stop || true
 docker-compose rm || true
-docker-compose up -d db && sleep 15
+docker-compose up -d db && sleep 30
 docker-compose ps -a
 echo "Test PostgSail Grafana Unit Test"
 docker-compose up -d app && sleep 5

.gitignore (8 lines changed)

@@ -1,2 +1,10 @@
 .DS_Store
 .env
+initdb/*.csv
+initdb/*.no
+initdb/*.jwk
+tests/node_modules/
+tests/output/
+assets/*
+.pnpm-store/
+db-data/

.schemalintrc.js (new file, 22 lines)

@@ -0,0 +1,22 @@
module.exports = {
connection: {
host: process.env.PGHOST,
user: process.env.PGUSER,
password: process.env.PGPASSWORD,
database: process.env.PGDATABASE,
charset: "utf8",
},
rules: {
"name-casing": ["error", "snake"],
"prefer-jsonb-to-json": ["error"],
"prefer-text-to-varchar": ["error"],
"prefer-timestamptz-to-timestamp": ["error"],
"prefer-identity-to-serial": ["error"],
"name-inflection": ["error", "singular"],
},
schemas: [{ name: "public" }, { name: "api" }],
ignores: [],
};
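The lint rules above correspond directly to schema changes elsewhere in this changeset (varchar columns moved to text, track_geojson moved to jsonb). As an illustrative sketch of the migrations these rules call for, assuming hypothetical column names where not confirmed by the commits:

```sql
-- prefer-text-to-varchar: varchar(n) adds a length check with no benefit over text.
-- (the "name" column here is hypothetical)
ALTER TABLE api.logbook ALTER COLUMN name TYPE TEXT;

-- prefer-jsonb-to-json: jsonb is binary, indexable, and faster to query than json.
ALTER TABLE api.logbook
  ALTER COLUMN track_geojson TYPE JSONB USING track_geojson::JSONB;

-- prefer-timestamptz-to-timestamp: store absolute instants, not wall-clock times.
-- (the "arrived" column here is hypothetical)
ALTER TABLE api.moorages ALTER COLUMN arrived TYPE TIMESTAMPTZ;
```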

Two binary image files changed (not shown): 222 KiB → 222 KiB, and 194 KiB → 195 KiB.


@@ -23,18 +23,18 @@ postgsail-telegram-bot:
 - Automatically log your voyages without manually starting or stopping a trip.
 - Automatically capture the details of your voyages (boat speed, heading, wind speed, etc).
-- Timelapse video your trips!
+- Timelapse video your trips, with or without time control.
 - Add custom notes to your logs.
-- Export to CSV or GPX and download your logs.
+- Export to CSV or GPX or KLM and download your logs.
 - Aggregate your trip statistics: Longest voyage, time spent at anchorages, home ports etc.
 - See your moorages on a global map, with incoming and outgoing voyages from each trip.
 - Monitor your boat (position, depth, wind, temperature, battery charge status, etc.) remotely.
 - History: view trends.
 - Alert monitoring: get notification on low voltage or low fuel remotely.
-- Notification via email or PushOver, Telegram
+- Notification via email or PushOver, Telegram.
-- Offline mode
+- Offline mode.
-- Low Bandwidth mode
+- Low Bandwidth mode.
-- Awesome statistics and graphs
+- Awesome statistics and graphs.
 - Anything missing? just ask!
 ## Context
@@ -96,12 +96,12 @@ Notice, that `PGRST_JWT_SECRET` must be at least 32 characters long.
 ### Deploy
-By default there is no network set and the postgresql data are store in a docker volume.
+By default there is no network set and all data are store in a docker volume.
-You can update the default settings by editing `docker-compose.yml` to your need.
+You can update the default settings by editing `docker-compose.yml` and `docker-compose.dev.yml` to your need.
 First let's initialize the database.
-#### Initialize database
+#### Step 1. Initialize database
 First let's import the SQL schema, execute:
@@ -109,7 +109,7 @@ First let's import the SQL schema, execute:
 $ docker-compose up db
 ```
-#### Start backend (db, api)
+#### Step 2. Start backend (db, api)
 Then launch the full stack (db, api) backend, execute:
@@ -147,7 +147,8 @@ You might want to import your influxdb1 data as well, [outflux](https://github.c
 Any taker on influxdb2 to PostgSail? It is definitely possible.
 Last, if you like, you can import the sample data from Signalk NMEA Plaka by running the tests.
-If everything goes well all tests pass successfully and you should receive a few notifications by email or PushOver.
+If everything goes well all tests pass successfully and you should receive a few notifications by email or PushOver or Telegram.
+[End-to-End (E2E) Testing.](https://github.com/xbgmsharp/postgsail/blob/main/tests/)
 ```
 $ docker-compose up tests
@@ -179,7 +180,7 @@ $ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_register_v
 #### API main workflow
-Check the [e2e unit test sample](https://github.com/xbgmsharp/postgsail/blob/main/tests/).
+Check the [End-to-End (E2E) test sample](https://github.com/xbgmsharp/postgsail/blob/main/tests/).
 ### Docker dependencies
@@ -208,10 +209,6 @@ Out of the box iot platform using docker with the following software:
 - [PostGIS, a spatial database extender for PostgreSQL object-relational database.](https://postgis.net/)
 - [Grafana, open observability platform | Grafana Labs](https://grafana.com)
-### Releases & updates
-PostgSail Release Notes & Future Plans: see planned and in-progress updates and detailed information about current and past releases. [PostgSail project](https://github.com/xbgmsharp?tab=projects)
 ### Support
 To get support, please create new [issue](https://github.com/xbgmsharp/postgsail/issues).


@@ -20,8 +20,21 @@
 "editable": true,
 "fiscalYearStartMonth": 0,
 "graphTooltip": 0,
-"id": 6,
+"id": 1,
-"links": [],
+"links": [
+{
+"asDropdown": false,
+"icon": "external link",
+"includeVars": true,
+"keepTime": false,
+"tags": [],
+"targetBlank": true,
+"title": "New link",
+"tooltip": "",
+"type": "dashboards",
+"url": ""
+}
+],
 "liveNow": false,
 "panels": [
 {
@@ -83,7 +96,7 @@
 "showThresholdLabels": false,
 "showThresholdMarkers": true
 },
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -220,7 +233,7 @@
 },
 "textMode": "auto"
 },
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -346,7 +359,7 @@
 },
 "textMode": "auto"
 },
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -481,7 +494,7 @@
 },
 "textMode": "auto"
 },
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -591,7 +604,7 @@
 },
 "textMode": "auto"
 },
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -719,7 +732,7 @@
 },
 "textMode": "auto"
 },
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -766,50 +779,108 @@
 "type": "stat"
 },
 {
-"aliasColors": {
-"electrical.batteries.256.current.mean": "blue"
-},
-"bars": false,
-"dashLength": 10,
-"dashes": false,
 "datasource": {
 "type": "postgres",
 "uid": "PCC52D03280B7034C"
 },
-"fill": 1,
-"fillGradient": 0,
+"fieldConfig": {
+"defaults": {
+"color": {
+"mode": "palette-classic"
+},
+"custom": {
+"axisCenteredZero": false,
+"axisColorMode": "text",
+"axisLabel": "",
+"axisPlacement": "auto",
+"barAlignment": 0,
+"drawStyle": "line",
+"fillOpacity": 10,
+"gradientMode": "none",
+"hideFrom": {
+"legend": false,
+"tooltip": false,
+"viz": false
+},
+"insertNulls": false,
+"lineInterpolation": "linear",
+"lineWidth": 1,
+"pointSize": 5,
+"scaleDistribution": {
+"type": "linear"
+},
+"showPoints": "never",
+"spanNulls": false,
+"stacking": {
+"group": "A",
+"mode": "none"
+},
+"thresholdsStyle": {
+"mode": "line+area"
+}
+},
+"mappings": [],
+"thresholds": {
+"mode": "absolute",
+"steps": [
+{
+"color": "transparent",
+"value": null
+},
+{
+"color": "red",
+"value": -1
+},
+{
+"color": "red",
+"value": 1
+}
+]
+},
+"unit": "amp"
+},
+"overrides": [
+{
+"matcher": {
+"id": "byName",
+"options": "electrical.batteries.256.current.mean"
+},
+"properties": [
+{
+"id": "color",
+"value": {
+"fixedColor": "blue",
+"mode": "fixed"
+}
+}
+]
+}
+]
+},
 "gridPos": {
 "h": 8,
 "w": 12,
 "x": 0,
 "y": 5
 },
-"hiddenSeries": false,
 "id": 47,
-"legend": {
-"avg": true,
-"current": false,
-"max": true,
-"min": true,
-"show": true,
-"total": false,
-"values": true
-},
-"lines": true,
-"linewidth": 1,
-"nullPointMode": "null",
 "options": {
-"alertThreshold": true
+"legend": {
+"calcs": [
+"mean",
+"max",
+"min"
+],
+"displayMode": "list",
+"placement": "bottom",
+"showLegend": true
 },
-"percentage": false,
-"pluginVersion": "9.5.1",
-"pointradius": 2,
-"points": false,
-"renderer": "flot",
-"seriesOverrides": [],
-"spaceLength": 10,
-"stack": false,
-"steppedLine": false,
+"tooltip": {
+"mode": "multi",
+"sort": "none"
+}
+},
+"pluginVersion": "10.1.0",
 "targets": [
 {
 "datasource": {
@@ -835,7 +906,8 @@
 "measurement": "electrical.batteries.256.current",
 "orderByTime": "ASC",
 "policy": "default",
-"rawSql": "",
+"rawQuery": true,
+"rawSql": "SET vessel.id = '${__user.login}';\nSELECT m.time, cast(m.metrics->'electrical.batteries.House.current' as NUMERIC) as current FROM api.metrics m WHERE $__timeFilter(time) AND m.vessel_id = '${boat}';\n",
 "refId": "A",
 "resultFormat": "time_series",
 "select": [
@@ -872,56 +944,379 @@
"tags": [] "tags": []
} }
], ],
"thresholds": [
{
"$$hashKey": "object:8288",
"colorMode": "critical",
"fill": true,
"line": true,
"op": "gt",
"value": -1,
"yaxis": "left"
},
{
"$$hashKey": "object:8294",
"colorMode": "ok",
"fill": true,
"line": true,
"op": "gt",
"value": 1,
"yaxis": "left"
}
],
"timeRegions": [],
"title": "House Amps", "title": "House Amps",
"type": "timeseries"
},
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 10,
"x": 12,
"y": 5
},
"id": 48,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": { "tooltip": {
"shared": true, "mode": "single",
"sort": 0, "sort": "none"
"value_type": "individual" }
}, },
"type": "graph", "targets": [
"xaxis": {
"mode": "time",
"show": true,
"values": []
},
"yaxes": [
{ {
"$$hashKey": "object:8148", "datasource": {
"format": "amp", "type": "postgres",
"logBase": 1, "uid": "PCC52D03280B7034C"
"show": true
}, },
"editorMode": "code",
"format": "table",
"rawQuery": true,
"rawSql": "SET vessel.id = '${__user.login}';\nSELECT m.time, cast(m.metrics->'electrical.batteries.House.capacity.stateOfCharge' as NUMERIC) * 100 as stateOfCharge FROM api.metrics m WHERE $__timeFilter(time) AND m.vessel_id = '${boat}';\n",
"refId": "A",
"sql": {
"columns": [
{ {
"$$hashKey": "object:8149", "parameters": [],
"format": "short", "type": "function"
"logBase": 1,
"show": true
} }
], ],
"yaxis": { "groupBy": [
"align": false {
"property": {
"type": "string"
},
"type": "groupBy"
} }
],
"limit": 50
}
}
],
"title": "System - Battery SOC (State of Charge)",
"type": "timeseries"
},
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "Volts",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "volt"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "current"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "voltage"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "yellow",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "current"
},
"properties": [
{
"id": "unit",
"value": "amp"
},
{
"id": "custom.axisLabel",
"value": "Amps"
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 13
},
"id": 37,
"options": {
"legend": {
"calcs": [
"mean",
"max",
"min"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "none"
}
},
"pluginVersion": "10.1.0",
"targets": [
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"editorMode": "code",
"format": "table",
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "electrical.batteries.256.voltage",
"orderByTime": "ASC",
"policy": "default",
"rawQuery": true,
"rawSql": "SET vessel.id = '${__user.login}';\nSELECT m.time, cast(m.metrics->'electrical.batteries.House.voltage' as NUMERIC) as voltage FROM api.metrics m WHERE $__timeFilter(time) AND m.vessel_id = '${boat}';\n",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
},
"tags": []
},
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"editorMode": "code",
"format": "table",
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"hide": false,
"measurement": "electrical.batteries.256.current",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"value\") FROM \"electrical.batteries.256.current\" WHERE $timeFilter GROUP BY time($__interval) fill(null)",
"rawQuery": true,
"rawSql": "SET vessel.id = '${__user.login}';\nSELECT m.time, cast(m.metrics->'electrical.batteries.House.current' as NUMERIC) as current FROM api.metrics m WHERE $__timeFilter(time) AND m.vessel_id = '${boat}';\n",
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
},
"tags": []
}
],
"title": "Battery Voltage and Current",
"type": "timeseries"
}, },
{ {
"aliasColors": { "aliasColors": {
@@ -938,9 +1333,9 @@
 "fillGradient": 0,
 "gridPos": {
 "h": 8,
-"w": 12,
+"w": 10,
 "x": 12,
-"y": 5
+"y": 13
 },
 "hiddenSeries": false,
 "id": 45,
@@ -960,7 +1355,7 @@
 "alertThreshold": true
 },
 "percentage": false,
-"pluginVersion": "9.5.1",
+"pluginVersion": "10.1.0",
 "pointradius": 2,
 "points": false,
 "renderer": "flot",
@@ -1042,180 +1437,6 @@
"align": false "align": false
} }
}, },
{
"aliasColors": {
"electrical.batteries.256.current.mean": "blue",
"electrical.batteries.256.voltage.mean": "yellow"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"description": "",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 13
},
"hiddenSeries": false,
"id": 37,
"legend": {
"alignAsTable": false,
"avg": true,
"current": false,
"max": true,
"min": true,
"rightSide": false,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "9.5.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [
{
"$$hashKey": "object:5017",
"alias": "electrical.batteries.256.current.mean",
"yaxis": 2
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "electrical.batteries.256.voltage",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
},
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"hide": false,
"measurement": "electrical.batteries.256.current",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"value\") FROM \"electrical.batteries.256.current\" WHERE $timeFilter GROUP BY time($__interval) fill(null)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
}
],
"thresholds": [],
"timeRegions": [],
"title": "House Bank Voltage vs Current",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"mode": "time",
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:4372",
"format": "volt",
"label": "Volts",
"logBase": 1,
"show": true
},
{
"$$hashKey": "object:4373",
"format": "amp",
"label": "Amps",
"logBase": 1,
"show": true
}
],
"yaxis": {
"align": false
}
},
{ {
"aliasColors": { "aliasColors": {
"From grid": "#1f78c1", "From grid": "#1f78c1",
@@ -1232,9 +1453,9 @@
"fillGradient": 0, "fillGradient": 0,
"gridPos": { "gridPos": {
"h": 8, "h": 8,
"w": 12, "w": 10,
"x": 12, "x": 12,
"y": 13 "y": 21
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 10, "id": 10,
@@ -1258,7 +1479,7 @@
}, },
"paceLength": 10, "paceLength": 10,
"percentage": false, "percentage": false,
"pluginVersion": "9.5.1", "pluginVersion": "10.1.0",
"pointradius": 2, "pointradius": 2,
"points": false, "points": false,
"renderer": "flot", "renderer": "flot",
@@ -1416,6 +1637,25 @@
"skipUrlSync": false, "skipUrlSync": false,
"sort": 0, "sort": 0,
"type": "query" "type": "query"
},
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"definition": "SET vessel.id = '${__user.login}';\nSELECT rtrim(key, 'voltage') AS __text ,key AS __value FROM api.monitoring_view2 where key ILIKE 'electrical.batteries%voltage';",
"hide": 0,
"includeAll": false,
"label": "Batteries",
"multi": false,
"name": "batteries",
"options": [],
"query": "SET vessel.id = '${__user.login}';\nSELECT rtrim(key, 'voltage') AS __text ,key AS __value FROM api.monitoring_view2 where key ILIKE 'electrical.batteries%voltage';",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
} }
] ]
}, },
@@ -1447,9 +1687,9 @@
"30d" "30d"
] ]
}, },
"timezone": "", "timezone": "utc",
"title": "Electrical System", "title": "Electrical System",
"uid": "rk0FTiIMk", "uid": "rk0FTiIMk",
"version": 1, "version": 11,
"weekStart": "" "weekStart": ""
} }
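A note on the new `batteries` template variable in the dashboard above: PostgreSQL's `rtrim(string, characters)` strips the longest trailing run of *any* characters drawn from the given set, not the literal suffix. For a dot-separated key such as `electrical.batteries.256.voltage` the strip stops at the final dot, so the displayed `__text` keeps a trailing `.`. A minimal sketch of the same semantics, using Python's `str.rstrip` (which behaves the same way):

```python
# PostgreSQL rtrim(string, characters) removes the longest trailing run of
# characters drawn from the set; Python's str.rstrip has the same semantics.
key = "electrical.batteries.256.voltage"

label = key.rstrip("voltage")  # strips any trailing v/o/l/t/a/g/e characters
print(label)  # electrical.batteries.256.

# The stop character is the dot, so the label keeps a trailing '.'.
# Removing the exact suffix instead would need removesuffix-style logic:
print(key.removesuffix(".voltage"))  # electrical.batteries.256
```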

View File

@@ -25,7 +25,7 @@
"editable": true, "editable": true,
"fiscalYearStartMonth": 0, "fiscalYearStartMonth": 0,
"graphTooltip": 0, "graphTooltip": 0,
"id": 2, "id": 3,
"links": [ "links": [
{ {
"asDropdown": false, "asDropdown": false,
@@ -92,7 +92,7 @@
"text": {}, "text": {},
"textMode": "auto" "textMode": "auto"
}, },
"pluginVersion": "9.4.3", "pluginVersion": "10.1.0",
"targets": [ "targets": [
{ {
"datasource": { "datasource": {
@@ -198,7 +198,7 @@
"text": {}, "text": {},
"textMode": "auto" "textMode": "auto"
}, },
"pluginVersion": "9.4.3", "pluginVersion": "10.1.0",
"targets": [ "targets": [
{ {
"datasource": { "datasource": {
@@ -279,6 +279,7 @@
"tooltip": false, "tooltip": false,
"viz": false "viz": false
}, },
"insertNulls": false,
"lineInterpolation": "linear", "lineInterpolation": "linear",
"lineWidth": 1, "lineWidth": 1,
"pointSize": 5, "pointSize": 5,
@@ -439,6 +440,7 @@
"tooltip": false, "tooltip": false,
"viz": false "viz": false
}, },
"insertNulls": false,
"lineInterpolation": "linear", "lineInterpolation": "linear",
"lineWidth": 1, "lineWidth": 1,
"pointSize": 5, "pointSize": 5,
@@ -573,6 +575,7 @@
"tooltip": false, "tooltip": false,
"viz": false "viz": false
}, },
"insertNulls": false,
"lineInterpolation": "linear", "lineInterpolation": "linear",
"lineWidth": 1, "lineWidth": 1,
"pointSize": 5, "pointSize": 5,
@@ -638,7 +641,7 @@
"group": [], "group": [],
"metricColumn": "none", "metricColumn": "none",
"rawQuery": true, "rawQuery": true,
"rawSql": "SET vessel.id = '${__user.login}';\nwith config as (select set_config('vessel.id', '${boat}', false) ) select * from api.monitoring_view", "rawSql": "SET vessel.id = '${__user.login}';\nselect * from api.monitoring_humidity;\n",
"refId": "A", "refId": "A",
"select": [ "select": [
[ [
@@ -679,11 +682,11 @@
] ]
} }
], ],
"title": "Title", "title": "environment.%.humidity",
"type": "timeseries" "type": "timeseries"
} }
], ],
"refresh": "", "refresh": "5m",
"revision": 1, "revision": 1,
"schemaVersion": 38, "schemaVersion": 38,
"style": "dark", "style": "dark",

File diff suppressed because it is too large

View File

@@ -1936,7 +1936,7 @@
"yBucketBound": "auto" "yBucketBound": "auto"
} }
], ],
"refresh": "1m", "refresh": "5m",
"schemaVersion": 37, "schemaVersion": 37,
"style": "dark", "style": "dark",
"tags": [], "tags": [],

View File

@@ -24,7 +24,21 @@
"editable": true, "editable": true,
"fiscalYearStartMonth": 0, "fiscalYearStartMonth": 0,
"graphTooltip": 0, "graphTooltip": 0,
"links": [], "id": 5,
"links": [
{
"asDropdown": false,
"icon": "external link",
"includeVars": true,
"keepTime": false,
"tags": [],
"targetBlank": true,
"title": "New link",
"tooltip": "",
"type": "dashboards",
"url": ""
}
],
"liveNow": false, "liveNow": false,
"panels": [ "panels": [
{ {
@@ -33,38 +47,17 @@
"uid": "OIttR1sVk" "uid": "OIttR1sVk"
}, },
"gridPos": { "gridPos": {
"h": 3, "h": 13,
"w": 24, "w": 10,
"x": 0, "x": 0,
"y": 0 "y": 0
}, },
"id": 1,
"targets": [
{
"datasource": {
"type": "postgres",
"uid": "OIttR1sVk"
},
"refId": "A"
}
],
"type": "welcome"
},
{
"datasource": {
"type": "postgres",
"uid": "OIttR1sVk"
},
"gridPos": {
"h": 12,
"w": 24,
"x": 0,
"y": 3
},
"id": 3, "id": 3,
"links": [], "links": [],
"options": { "options": {
"folderId": 0, "folderId": 0,
"includeVars": false,
"keepTime": false,
"maxItems": 30, "maxItems": 30,
"query": "", "query": "",
"showHeadings": true, "showHeadings": true,
@@ -73,7 +66,7 @@
"showStarred": true, "showStarred": true,
"tags": [] "tags": []
}, },
"pluginVersion": "9.4.3", "pluginVersion": "10.1.4",
"tags": [], "tags": [],
"targets": [ "targets": [
{ {
@@ -84,8 +77,156 @@
"refId": "A" "refId": "A"
} }
], ],
"title": "Dashboards", "title": "PostgSail Dashboards",
"type": "dashlist" "type": "dashlist"
},
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 13,
"w": 12,
"x": 10,
"y": 0
},
"id": 5,
"maxDataPoints": 500,
"options": {
"basemap": {
"config": {},
"name": "Layer 0",
"type": "default"
},
"controls": {
"mouseWheelZoom": true,
"showAttribution": true,
"showDebug": false,
"showMeasure": false,
"showScale": false,
"showZoom": true
},
"layers": [
{
"config": {
"showLegend": true,
"style": {
"color": {
"fixed": "dark-green"
},
"opacity": 0.4,
"rotation": {
"fixed": 0,
"max": 360,
"min": -360,
"mode": "mod"
},
"size": {
"fixed": 5,
"max": 15,
"min": 2
},
"symbol": {
"fixed": "img/icons/marker/circle.svg",
"mode": "fixed"
},
"textConfig": {
"fontSize": 12,
"offsetX": 0,
"offsetY": 0,
"textAlign": "center",
"textBaseline": "middle"
}
}
},
"filterData": {
"id": "byRefId",
"options": "A"
},
"location": {
"latitude": "value",
"longitude": "value",
"mode": "auto"
},
"name": "Boat",
"tooltip": true,
"type": "markers"
}
],
"tooltip": {
"mode": "details"
},
"view": {
"allLayers": true,
"id": "fit",
"lat": 0,
"lon": 0,
"zoom": 5
}
},
"pluginVersion": "10.1.4",
"targets": [
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"editorMode": "code",
"format": "table",
"rawQuery": true,
"rawSql": "SELECT latitude, longitude FROM api.metrics WHERE vessel_id = '${boat}' ORDER BY time ASC LIMIT 1;",
"refId": "A",
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
}
}
],
"title": "Location",
"type": "geomap"
} }
], ],
"refresh": "", "refresh": "",
@@ -94,10 +235,31 @@
"style": "dark", "style": "dark",
"tags": [], "tags": [],
"templating": { "templating": {
"list": [] "list": [
{
"datasource": {
"type": "postgres",
"uid": "PCC52D03280B7034C"
},
"definition": "SET \"user.email\" = '${__user.email}';\nSET vessel.id = '${__user.login}';\nSELECT\n v.name AS __text,\n m.vessel_id AS __value\n FROM auth.vessels v\n JOIN api.metadata m ON v.owner_email = '${__user.email}' and m.vessel_id = v.vessel_id;",
"description": "Vessel Name",
"hide": 0,
"includeAll": false,
"label": "Boat",
"multi": false,
"name": "boat",
"options": [],
"query": "SET \"user.email\" = '${__user.email}';\nSET vessel.id = '${__user.login}';\nSELECT\n v.name AS __text,\n m.vessel_id AS __value\n FROM auth.vessels v\n JOIN api.metadata m ON v.owner_email = '${__user.email}' and m.vessel_id = v.vessel_id;",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
}
]
}, },
"time": { "time": {
"from": "now-6h", "from": "now-90d",
"to": "now" "to": "now"
}, },
"timepicker": { "timepicker": {
@@ -129,6 +291,7 @@
}, },
"timezone": "browser", "timezone": "browser",
"title": "Home", "title": "Home",
"version": 0, "uid": "d81aa15b",
"version": 1,
"weekStart": "" "weekStart": ""
} }

View File

@@ -7,6 +7,7 @@ auto_assign_org_role = Editor
enabled = true enabled = true
header_name = X-WEBAUTH-USER header_name = X-WEBAUTH-USER
header_property = email header_property = email
headers = Login:X-WEBAUTH-LOGIN
auto_sign_up = true auto_sign_up = true
enable_login_token = true enable_login_token = true
login_maximum_inactive_lifetime_duration = 12h login_maximum_inactive_lifetime_duration = 12h
@@ -14,3 +15,7 @@ login_maximum_lifetime_duration = 1d
[dashboards] [dashboards]
default_home_dashboard_path = /etc/grafana/dashboards/home.json default_home_dashboard_path = /etc/grafana/dashboards/home.json
[analytics]
feedback_links_enabled = false
reporting_enabled = false

View File

@@ -51,6 +51,10 @@ CREATE DATABASE signalk;
ALTER DATABASE signalk WITH CONNECTION LIMIT = 100; ALTER DATABASE signalk WITH CONNECTION LIMIT = 100;
-- Set timezone to UTC -- Set timezone to UTC
ALTER DATABASE signalk SET TIMEZONE='UTC'; ALTER DATABASE signalk SET TIMEZONE='UTC';
-- Set datestyle output
ALTER DATABASE signalk SET datestyle TO "ISO, DMY";
-- Set intervalstyle output
ALTER DATABASE signalk SET intervalstyle TO 'iso_8601';
-- connect to the DB -- connect to the DB
\c signalk \c signalk
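The two new `ALTER DATABASE` settings change how values are rendered: `datestyle "ISO, DMY"` outputs dates in ISO order while parsing ambiguous input as day-month-year, and `intervalstyle 'iso_8601'` renders intervals as ISO 8601 durations (e.g. `SELECT '1 day 2 hours'::interval` yields `P1DT2H`). A sketch of that duration format for simple positive day/time intervals; the helper name `to_iso8601` is illustrative, not part of the schema:

```python
from datetime import timedelta

def to_iso8601(delta: timedelta) -> str:
    """Render a timedelta the way PostgreSQL's iso_8601 intervalstyle
    renders simple positive day/time intervals, e.g. P1DT2H."""
    hours, rem = divmod(delta.seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    out = "P"
    if delta.days:
        out += f"{delta.days}D"
    time_part = ""
    if hours:
        time_part += f"{hours}H"
    if minutes:
        time_part += f"{minutes}M"
    if seconds:
        time_part += f"{seconds}S"
    if time_part:
        out += "T" + time_part
    return out if out != "P" else "PT0S"

print(to_iso8601(timedelta(days=1, hours=2)))  # P1DT2H
```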

View File

@@ -43,7 +43,6 @@ CREATE TYPE status AS ENUM ('sailing', 'motoring', 'moored', 'anchored');
-- Table api.metrics -- Table api.metrics
CREATE TABLE IF NOT EXISTS api.metrics ( CREATE TABLE IF NOT EXISTS api.metrics (
time TIMESTAMP WITHOUT TIME ZONE NOT NULL, time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
--client_id VARCHAR(255) NOT NULL REFERENCES api.metadata(client_id) ON DELETE RESTRICT,
client_id TEXT NULL, client_id TEXT NULL,
vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT, vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT,
latitude DOUBLE PRECISION NULL, latitude DOUBLE PRECISION NULL,
@@ -55,8 +54,8 @@ CREATE TABLE IF NOT EXISTS api.metrics (
status status NULL, status status NULL,
metrics jsonb NULL, metrics jsonb NULL,
--CONSTRAINT valid_client_id CHECK (length(client_id) > 10), --CONSTRAINT valid_client_id CHECK (length(client_id) > 10),
CONSTRAINT valid_latitude CHECK (latitude >= -90 and latitude <= 90), --CONSTRAINT valid_latitude CHECK (latitude >= -90 and latitude <= 90),
CONSTRAINT valid_longitude CHECK (longitude >= -180 and longitude <= 180), --CONSTRAINT valid_longitude CHECK (longitude >= -180 and longitude <= 180),
PRIMARY KEY (time, vessel_id) PRIMARY KEY (time, vessel_id)
); );
-- Description -- Description
@@ -97,22 +96,19 @@ SELECT create_hypertable('api.metrics', 'time', chunk_time_interval => INTERVAL
CREATE TABLE IF NOT EXISTS api.logbook( CREATE TABLE IF NOT EXISTS api.logbook(
id SERIAL PRIMARY KEY, id SERIAL PRIMARY KEY,
--client_id VARCHAR(255) NOT NULL REFERENCES api.metadata(client_id) ON DELETE RESTRICT,
--client_id VARCHAR(255) NULL,
vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT, vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT,
active BOOLEAN DEFAULT false, active BOOLEAN DEFAULT false,
name VARCHAR(255), name TEXT,
_from VARCHAR(255), _from TEXT,
_from_lat DOUBLE PRECISION NULL, _from_lat DOUBLE PRECISION NULL,
_from_lng DOUBLE PRECISION NULL, _from_lng DOUBLE PRECISION NULL,
_to VARCHAR(255), _to TEXT,
_to_lat DOUBLE PRECISION NULL, _to_lat DOUBLE PRECISION NULL,
_to_lng DOUBLE PRECISION NULL, _to_lng DOUBLE PRECISION NULL,
--track_geom Geometry(LINESTRING) --track_geom Geometry(LINESTRING)
track_geom geometry(LINESTRING,4326) NULL, track_geom geometry(LINESTRING,4326) NULL,
track_geog geography(LINESTRING) NULL, track_geog geography(LINESTRING) NULL,
track_geojson JSON NULL, track_geojson JSONB NULL,
track_gpx XML NULL,
_from_time TIMESTAMP WITHOUT TIME ZONE NOT NULL, _from_time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
_to_time TIMESTAMP WITHOUT TIME ZONE NULL, _to_time TIMESTAMP WITHOUT TIME ZONE NULL,
distance NUMERIC, -- meters? distance NUMERIC, -- meters?
@@ -137,19 +133,16 @@ COMMENT ON COLUMN api.logbook.track_geom IS 'postgis geometry type EPSG:4326 Uni
CREATE INDEX ON api.logbook USING GIST ( track_geog ); CREATE INDEX ON api.logbook USING GIST ( track_geog );
COMMENT ON COLUMN api.logbook.track_geog IS 'postgis geography type default SRID 4326 Unit: degrees'; COMMENT ON COLUMN api.logbook.track_geog IS 'postgis geography type default SRID 4326 Unit: degrees';
-- Otherwise -- ERROR: Only lon/lat coordinate systems are supported in geography. -- Otherwise -- ERROR: Only lon/lat coordinate systems are supported in geography.
COMMENT ON COLUMN api.logbook.track_geojson IS 'store the geojson track metrics data, can not depend api.metrics table, should be generate from linetring to save disk space?'; COMMENT ON COLUMN api.logbook.track_geojson IS 'store generated geojson with track metrics data (LineString and Point features); cannot depend on the api.metrics table';
COMMENT ON COLUMN api.logbook.track_gpx IS 'store the gpx track metrics data, can not depend api.metrics table, should be generate from linetring to save disk space?';
--------------------------------------------------------------------------- ---------------------------------------------------------------------------
-- Stays -- Stays
-- virtual logbook by boat? -- virtual logbook by boat?
CREATE TABLE IF NOT EXISTS api.stays( CREATE TABLE IF NOT EXISTS api.stays(
id SERIAL PRIMARY KEY, id SERIAL PRIMARY KEY,
--client_id VARCHAR(255) NOT NULL REFERENCES api.metadata(client_id) ON DELETE RESTRICT,
--client_id VARCHAR(255) NULL,
vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT, vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT,
active BOOLEAN DEFAULT false, active BOOLEAN DEFAULT false,
name VARCHAR(255), name TEXT,
latitude DOUBLE PRECISION NULL, latitude DOUBLE PRECISION NULL,
longitude DOUBLE PRECISION NULL, longitude DOUBLE PRECISION NULL,
geog GEOGRAPHY(POINT) NULL, geog GEOGRAPHY(POINT) NULL,
@@ -179,7 +172,7 @@ CREATE TABLE IF NOT EXISTS api.moorages(
--client_id VARCHAR(255) NULL, --client_id VARCHAR(255) NULL,
vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT, vessel_id TEXT NOT NULL REFERENCES api.metadata(vessel_id) ON DELETE RESTRICT,
name TEXT, name TEXT,
country TEXT, -- todo need to update reverse_geocode_py_fn country TEXT,
stay_id INT NOT NULL, -- needed? stay_id INT NOT NULL, -- needed?
stay_code INT DEFAULT 1, -- needed? REFERENCES api.stays_at(stay_code) stay_code INT DEFAULT 1, -- needed? REFERENCES api.stays_at(stay_code)
stay_duration INTERVAL NULL, stay_duration INTERVAL NULL,
@@ -211,7 +204,7 @@ CREATE TABLE IF NOT EXISTS api.stays_at(
COMMENT ON TABLE api.stays_at IS 'Stay Type'; COMMENT ON TABLE api.stays_at IS 'Stay Type';
-- Insert default possible values -- Insert default possible values
INSERT INTO api.stays_at(stay_code, description) VALUES INSERT INTO api.stays_at(stay_code, description) VALUES
(1, 'Unknow'), (1, 'Unknown'),
(2, 'Anchor'), (2, 'Anchor'),
(3, 'Mooring Buoy'), (3, 'Mooring Buoy'),
(4, 'Dock'); (4, 'Dock');
@@ -357,13 +350,37 @@ CREATE FUNCTION metrics_trigger_fn() RETURNS trigger AS $metrics$
END IF; END IF;
IF previous_time > NEW.time THEN IF previous_time > NEW.time THEN
-- Ignore entry if new time is older than previous time -- Ignore entry if new time is older than previous time
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], new time is older [%] > [%]', NEW.vessel_id, previous_time, NEW.time; RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], new time is older than previous_time [%] > [%]', NEW.vessel_id, previous_time, NEW.time;
RETURN NULL; RETURN NULL;
END IF; END IF;
-- Check if latitude or longitude are type double
--IF public.isdouble(NEW.latitude::TEXT) IS False OR public.isdouble(NEW.longitude::TEXT) IS False THEN
-- -- Ignore entry if null latitude,longitude
-- RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], not a double type for latitude or longitude [%] [%]', NEW.vessel_id, NEW.latitude, NEW.longitude;
-- RETURN NULL;
--END IF;
-- Check if latitude or longitude are null -- Check if latitude or longitude are null
IF NEW.latitude IS NULL OR NEW.longitude IS NULL THEN IF NEW.latitude IS NULL OR NEW.longitude IS NULL THEN
-- Ignore entry if null latitude,longitude -- Ignore entry if null latitude,longitude
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], null latitude,longitude [%] [%]', NEW.vessel_id, NEW.latitude, NEW.longitude; RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], null latitude or longitude [%] [%]', NEW.vessel_id, NEW.latitude, NEW.longitude;
RETURN NULL;
END IF;
-- Check if valid latitude
IF NEW.latitude >= 90 OR NEW.latitude <= -90 THEN
-- Ignore entry if invalid latitude,longitude
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], invalid latitude >= 90 OR <= -90 [%] [%]', NEW.vessel_id, NEW.latitude, NEW.longitude;
RETURN NULL;
END IF;
-- Check if valid longitude
IF NEW.longitude >= 180 OR NEW.longitude <= -180 THEN
-- Ignore entry if invalid latitude,longitude
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], invalid longitude >= 180 OR <= -180 [%] [%]', NEW.vessel_id, NEW.latitude, NEW.longitude;
RETURN NULL;
END IF;
-- Reject latitude equal to longitude (e.g. the -0.0000001 glitch from a Victron Cerbo)
IF NEW.latitude = NEW.longitude THEN
-- Ignore entry if latitude and longitude are equal
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], latitude and longitude are equal [%] [%]', NEW.vessel_id, NEW.latitude, NEW.longitude;
RETURN NULL; RETURN NULL;
END IF; END IF;
-- Check if status is null -- Check if status is null
@@ -396,6 +413,12 @@ CREATE FUNCTION metrics_trigger_fn() RETURNS trigger AS $metrics$
RAISE WARNING 'Metrics Ignoring metric, invalid status [%]', NEW.status; RAISE WARNING 'Metrics Ignoring metric, invalid status [%]', NEW.status;
RETURN NULL; RETURN NULL;
END IF; END IF;
-- Check that speedOverGround is a plausible value
IF NEW.speedoverground >= 40 THEN
-- Ignore entry as speedOverGround is implausible
RAISE WARNING 'Metrics Ignoring metric, vessel_id [%], speedOverGround is invalid, >= 40 [%]', NEW.vessel_id, NEW.speedoverground;
RETURN NULL;
END IF;
-- Check the state and if any previous/current entry -- Check the state and if any previous/current entry
-- If change of state and new status is sailing or motoring -- If change of state and new status is sailing or motoring
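The new trigger guards above reject NULL coordinates, out-of-range latitude/longitude, equal lat/lon pairs, and implausible speedOverGround. A minimal sketch of the same acceptance rules; the function and parameter names are illustrative, not from the schema:

```python
def metric_is_valid(lat, lon, sog=None) -> bool:
    """Mirror the trigger's coordinate/speed guards: reject NULLs,
    out-of-range values, lat == lon glitches, and SOG >= 40."""
    if lat is None or lon is None:
        return False
    if lat >= 90 or lat <= -90:        # invalid latitude
        return False
    if lon >= 180 or lon <= -180:      # invalid longitude
        return False
    if lat == lon:                     # e.g. -0.0000001,-0.0000001 from a Victron Cerbo
        return False
    if sog is not None and sog >= 40:  # implausible speedOverGround
        return False
    return True

print(metric_is_valid(59.3, 18.1))              # True
print(metric_is_valid(-0.0000001, -0.0000001))  # False
```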

View File

@@ -16,47 +16,75 @@ CREATE OR REPLACE FUNCTION api.timelapse_fn(
IN end_log INTEGER DEFAULT NULL, IN end_log INTEGER DEFAULT NULL,
IN start_date TEXT DEFAULT NULL, IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL, IN end_date TEXT DEFAULT NULL,
OUT geojson JSON) RETURNS JSON AS $timelapse$ OUT geojson JSONB) RETURNS JSONB AS $timelapse$
DECLARE DECLARE
_geojson jsonb; _geojson jsonb;
BEGIN BEGIN
-- TODO using jsonb pgsql function instead of python -- Use a subquery to force ordering by id
-- Merge GIS track_geom into a GeoJSON MultiLineString
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(track_geojson->'features') INTO _geojson WITH logbook as (
SELECT track_geom
FROM api.logbook FROM api.logbook
WHERE id >= start_log WHERE id >= start_log
AND id <= end_log AND id <= end_log
AND track_geojson IS NOT NULL; AND track_geom IS NOT NULL
--raise WARNING 'by log _geojson %' , _geojson; GROUP BY id
ORDER BY id ASC
)
SELECT ST_AsGeoJSON(geo.*) INTO _geojson FROM (
SELECT ST_Collect(
ARRAY(
SELECT track_geom FROM logbook))
) as geo;
--raise WARNING 'by log id _geojson %' , _geojson;
ELSIF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN ELSIF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
SELECT jsonb_agg(track_geojson->'features') INTO _geojson WITH logbook as (
SELECT track_geom
FROM api.logbook FROM api.logbook
WHERE _from_time >= start_date::TIMESTAMP WITHOUT TIME ZONE WHERE _from_time >= start_date::TIMESTAMP WITHOUT TIME ZONE
AND _to_time <= end_date::TIMESTAMP WITHOUT TIME ZONE + interval '23 hours 59 minutes' AND _to_time <= end_date::TIMESTAMP WITHOUT TIME ZONE + interval '23 hours 59 minutes'
AND track_geojson IS NOT NULL; AND track_geom IS NOT NULL
GROUP BY id
ORDER BY id ASC
)
SELECT ST_AsGeoJSON(geo.*) INTO _geojson FROM (
SELECT ST_Collect(
ARRAY(
SELECT track_geom FROM logbook))
) as geo;
--raise WARNING 'by date _geojson %' , _geojson; --raise WARNING 'by date _geojson %' , _geojson;
ELSE ELSE
SELECT jsonb_agg(track_geojson->'features') INTO _geojson WITH logbook as (
SELECT track_geom
FROM api.logbook FROM api.logbook
WHERE track_geojson IS NOT NULL; WHERE track_geom IS NOT NULL
GROUP BY id
ORDER BY id ASC
)
SELECT ST_AsGeoJSON(geo.*) INTO _geojson FROM (
SELECT ST_Collect(
ARRAY(
SELECT track_geom FROM logbook))
) as geo;
--raise WARNING 'all result _geojson %' , _geojson; --raise WARNING 'all result _geojson %' , _geojson;
END IF; END IF;
-- Return a GeoJSON filter on Point -- Return a GeoJSON MultiLineString
-- result _geojson [null, null] -- result _geojson [null, null]
--raise WARNING 'result _geojson %' , _geojson; --raise WARNING 'result _geojson %' , _geojson;
SELECT json_build_object( SELECT json_build_object(
'type', 'FeatureCollection', 'type', 'FeatureCollection',
'features', public.geojson_py_fn(_geojson, 'LineString'::TEXT) ) INTO geojson; 'features', ARRAY[_geojson] ) INTO geojson;
END; END;
$timelapse$ LANGUAGE plpgsql; $timelapse$ LANGUAGE plpgsql;
-- Description -- Description
COMMENT ON FUNCTION COMMENT ON FUNCTION
api.timelapse_fn api.timelapse_fn
IS 'Export to geojson feature point with Time and courseOverGroundTrue properties'; IS 'Export selected log track_geom geometries to GeoJSON as a MultiLineString with empty properties';
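The reworked api.timelapse_fn wraps ST_AsGeoJSON(ST_Collect(...)) in a FeatureCollection, so the payload now carries a single collected geometry (a MultiLineString when all tracks are LineStrings) instead of per-point features. A sketch of the resulting shape; the coordinates are made up:

```python
import json

# Shape of the timelapse payload after the change: one feature holding the
# collected geometry rather than one feature per metric point.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {},
            "geometry": {
                "type": "MultiLineString",
                "coordinates": [  # one linestring per logbook track
                    [[23.53, 60.07], [23.55, 60.09]],
                    [[23.60, 60.11], [23.62, 60.14]],
                ],
            },
        }
    ],
}
print(json.dumps(feature_collection)[:30])
```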
-- export_logbook_geojson_fn -- export_logbook_geojson_fn
DROP FUNCTION IF EXISTS api.export_logbook_geojson_fn; DROP FUNCTION IF EXISTS api.export_logbook_geojson_fn;
CREATE FUNCTION api.export_logbook_geojson_fn(IN _id integer, OUT geojson JSON) RETURNS JSON AS $export_logbook_geojson$ CREATE FUNCTION api.export_logbook_geojson_fn(IN _id integer, OUT geojson JSONB) RETURNS JSONB AS $export_logbook_geojson$
-- validate with geojson.io -- validate with geojson.io
DECLARE DECLARE
logbook_rec record; logbook_rec record;
@@ -80,37 +108,236 @@ $export_logbook_geojson$ LANGUAGE plpgsql;
-- Description -- Description
COMMENT ON FUNCTION COMMENT ON FUNCTION
api.export_logbook_geojson_fn api.export_logbook_geojson_fn
IS 'Export a log entry to geojson feature linestring and multipoint'; IS 'Export a log entry to geojson with features LineString and Point';
-- Generate GPX XML file output -- Generate GPX XML file output
-- https://opencpn.org/OpenCPN/info/gpxvalidation.html -- https://opencpn.org/OpenCPN/info/gpxvalidation.html
-- --
DROP FUNCTION IF EXISTS api.export_logbook_gpx_fn; DROP FUNCTION IF EXISTS api.export_logbook_gpx_fn;
CREATE OR REPLACE FUNCTION api.export_logbook_gpx_fn(IN _id INTEGER, OUT gpx XML) RETURNS pg_catalog.xml CREATE OR REPLACE FUNCTION api.export_logbook_gpx_fn(IN _id INTEGER) RETURNS pg_catalog.xml
AS $export_logbook_gpx$ AS $export_logbook_gpx2$
DECLARE
app_settings jsonb;
BEGIN
-- If _id is not NULL and > 0
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> export_logbook_gpx_fn invalid input %', _id;
RETURN '';
END IF;
-- Gather url from app settings
app_settings := get_app_url_fn();
--RAISE DEBUG '-> logbook_update_gpx_fn app_settings %', app_settings;
-- Generate GPX XML, extract Point features from geojson.
RETURN xmlelement(name gpx,
xmlattributes( '1.1' as version,
'PostgSAIL' as creator,
'http://www.topografix.com/GPX/1/1' as xmlns,
'http://www.opencpn.org' as "xmlns:opencpn",
app_settings->>'app.url' as "xmlns:postgsail",
'http://www.w3.org/2001/XMLSchema-instance' as "xmlns:xsi",
'http://www.garmin.com/xmlschemas/GpxExtensions/v3' as "xmlns:gpxx",
'http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd http://www.garmin.com/xmlschemas/GpxExtensions/v3 http://www8.garmin.com/xmlschemas/GpxExtensionsv3.xsd' as "xsi:schemaLocation"),
xmlelement(name metadata,
xmlelement(name link, xmlattributes(app_settings->>'app.url' as href),
xmlelement(name text, 'PostgSail'))),
xmlelement(name trk,
xmlelement(name name, l.name),
xmlelement(name desc, l.notes),
xmlelement(name link, xmlattributes(concat(app_settings->>'app.url', '/log/', l.id) as href),
xmlelement(name text, l.name)),
xmlelement(name extensions, xmlelement(name "postgsail:log_id", l.id),
xmlelement(name "postgsail:link", concat(app_settings->>'app.url', '/log/', l.id)),
xmlelement(name "opencpn:guid", uuid_generate_v4()),
xmlelement(name "opencpn:viz", '1'),
xmlelement(name "opencpn:start", l._from_time),
xmlelement(name "opencpn:end", l._to_time)
),
xmlelement(name trkseg, xmlagg(
xmlelement(name trkpt,
xmlattributes(features->'geometry'->'coordinates'->1 as lat, features->'geometry'->'coordinates'->0 as lon),
xmlelement(name time, features->'properties'->>'time')
)))))::pg_catalog.xml
FROM api.logbook l, jsonb_array_elements(track_geojson->'features') AS features
WHERE features->'geometry'->>'type' = 'Point'
AND l.id = _id
GROUP BY l.name,l.notes,l.id;
END;
$export_logbook_gpx2$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.export_logbook_gpx_fn
IS 'Export a log entry to GPX XML format';
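The GPX generator above pulls trackpoints from the stored GeoJSON. GeoJSON coordinates are ordered [longitude, latitude], while GPX `trkpt` attributes are `lat`/`lon`, which is why the SQL indexes `->1` for lat and `->0` for lon. A minimal sketch of that extraction; the feature layout mirrors the stored track_geojson Point features:

```python
# GeoJSON stores coordinates as [lon, lat]; GPX <trkpt> wants lat="" lon="".
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [23.53, 60.07]},
    "properties": {"time": "2023-10-25T09:27:00Z"},
}

lon, lat = feature["geometry"]["coordinates"]  # note the [lon, lat] order
trkpt = '<trkpt lat="{}" lon="{}"><time>{}</time></trkpt>'.format(
    lat, lon, feature["properties"]["time"]
)
print(trkpt)
```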
-- Generate KML XML file output
-- https://developers.google.com/kml/documentation/kml_tut
-- TODO https://developers.google.com/kml/documentation/time#timespans
DROP FUNCTION IF EXISTS api.export_logbook_kml_fn;
CREATE OR REPLACE FUNCTION api.export_logbook_kml_fn(IN _id INTEGER) RETURNS pg_catalog.xml
AS $export_logbook_kml$
DECLARE DECLARE
logbook_rec record; logbook_rec record;
BEGIN BEGIN
-- If _id is not NULL and > 0 -- If _id is not NULL and > 0
IF _id IS NULL OR _id < 1 THEN IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> export_logbook_gpx_fn invalid input %', _id; RAISE WARNING '-> export_logbook_kml_fn invalid input %', _id;
RETURN; return '';
END IF; END IF;
-- Gather log details -- Gather log details
SELECT * INTO logbook_rec SELECT * INTO logbook_rec
FROM api.logbook WHERE id = _id; FROM api.logbook WHERE id = _id;
-- Ensure the query is successful -- Ensure the query is successful
IF logbook_rec.vessel_id IS NULL THEN IF logbook_rec.vessel_id IS NULL THEN
RAISE WARNING '-> export_logbook_gpx_fn invalid logbook %', _id; RAISE WARNING '-> export_logbook_kml_fn invalid logbook %', _id;
RETURN; return '';
END IF; END IF;
gpx := logbook_rec.track_gpx; -- Extract POINT from LINESTRING to generate KML XML
RETURN xmlelement(name kml,
xmlattributes( '1.0' as version,
'PostgSAIL' as creator,
'http://www.w3.org/2005/Atom' as "xmlns:atom",
'http://www.opengis.net/kml/2.2' as "xmlns",
'http://www.google.com/kml/ext/2.2' as "xmlns:gx",
'http://www.opengis.net/kml/2.2' as "xmlns:kml"),
xmlelement(name "Document",
xmlelement(name name, logbook_rec.name),
xmlelement(name "Placemark",
xmlelement(name name, logbook_rec.notes),
ST_AsKML(logbook_rec.track_geom)::pg_catalog.xml)
))::pg_catalog.xml
FROM api.logbook WHERE id = _id;
END; END;
$export_logbook_gpx$ LANGUAGE plpgsql; $export_logbook_kml$ LANGUAGE plpgsql;
-- Description -- Description
COMMENT ON FUNCTION COMMENT ON FUNCTION
api.export_logbook_gpx_fn api.export_logbook_kml_fn
IS 'Export a log entry to GPX XML format'; IS 'Export a log entry to KML XML format';
DROP FUNCTION IF EXISTS api.export_logbooks_gpx_fn;
CREATE OR REPLACE FUNCTION api.export_logbooks_gpx_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL) RETURNS pg_catalog.xml
AS $export_logbooks_gpx$
declare
merged_jsonb jsonb;
app_settings jsonb;
BEGIN
-- Merge GeoJSON features of geometry type Point into a jsonb array
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
SELECT jsonb_agg(
jsonb_build_object('coordinates', f->'geometry'->'coordinates', 'time', f->'properties'->>'time')
) INTO merged_jsonb
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE id >= start_log
AND id <= end_log
AND track_geojson IS NOT NULL
GROUP BY id
ORDER BY id ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
ELSE
SELECT jsonb_agg(
jsonb_build_object('coordinates', f->'geometry'->'coordinates', 'time', f->'properties'->>'time')
) INTO merged_jsonb
FROM (
SELECT jsonb_array_elements(track_geojson->'features') AS f
FROM api.logbook
WHERE track_geojson IS NOT NULL
GROUP BY id
ORDER BY id ASC
) AS sub
WHERE (f->'geometry'->>'type') = 'Point';
END IF;
--RAISE WARNING '-> export_logbooks_gpx_fn merged_jsonb %' , merged_jsonb;
-- Gather url from app settings
app_settings := get_app_url_fn();
--RAISE WARNING '-> export_logbooks_gpx_fn app_settings %', app_settings;
-- Generate GPX XML, extract Point features from geojson.
RETURN xmlelement(name gpx,
xmlattributes( '1.1' as version,
'PostgSAIL' as creator,
'http://www.topografix.com/GPX/1/1' as xmlns,
'http://www.opencpn.org' as "xmlns:opencpn",
app_settings->>'app.url' as "xmlns:postgsail"),
xmlelement(name metadata,
xmlelement(name link, xmlattributes(app_settings->>'app.url' as href),
xmlelement(name text, 'PostgSail'))),
xmlelement(name trk,
xmlelement(name name, 'logbook name'),
xmlelement(name trkseg, xmlagg(
xmlelement(name trkpt,
xmlattributes(features->'coordinates'->1 as lat, features->'coordinates'->0 as lon),
xmlelement(name time, features->>'time')
)))))::pg_catalog.xml
FROM jsonb_array_elements(merged_jsonb) AS features;
END;
$export_logbooks_gpx$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.export_logbooks_gpx_fn
IS 'Export multiple log entries to GPX XML format';
DROP FUNCTION IF EXISTS api.export_logbooks_kml_fn;
CREATE OR REPLACE FUNCTION api.export_logbooks_kml_fn(
IN start_log INTEGER DEFAULT NULL,
IN end_log INTEGER DEFAULT NULL) RETURNS pg_catalog.xml
AS $export_logbooks_kml$
DECLARE
_geom geometry;
app_settings jsonb;
BEGIN
-- Merge GIS track_geom into a GeoJSON MultiLineString
IF start_log IS NOT NULL AND public.isnumeric(start_log::text) AND public.isnumeric(end_log::text) THEN
WITH logbook as (
SELECT track_geom
FROM api.logbook
WHERE id >= start_log
AND id <= end_log
AND track_geom IS NOT NULL
GROUP BY id
ORDER BY id ASC
)
SELECT ST_Collect(
ARRAY(
SELECT track_geom FROM logbook))
into _geom;
ELSE
WITH logbook as (
SELECT track_geom
FROM api.logbook
WHERE track_geom IS NOT NULL
GROUP BY id
ORDER BY id ASC
)
SELECT ST_Collect(
ARRAY(
SELECT track_geom FROM logbook))
into _geom;
--raise WARNING 'all collected _geom %' , _geom;
END IF;
-- Extract POINT from LINESTRING to generate KML XML
RETURN xmlelement(name kml,
xmlattributes( '1.0' as version,
'PostgSAIL' as creator,
'http://www.w3.org/2005/Atom' as "xmlns:atom",
'http://www.opengis.net/kml/2.2' as "xmlns",
'http://www.google.com/kml/ext/2.2' as "xmlns:gx",
'http://www.opengis.net/kml/2.2' as "xmlns:kml"),
xmlelement(name "Document",
xmlelement(name name, 'logbook name'),
xmlelement(name "Placemark",
ST_AsKML(_geom)::pg_catalog.xml
)
)
)::pg_catalog.xml;
END;
$export_logbooks_kml$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.export_logbooks_kml_fn
IS 'Export log entries to KML XML format';
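As with the GPX variant, the KML export is a plain SQL call; a hedged sketch (the range values are illustrative):

```sql
-- Export logbook entries 1 through 10 as a single KML document
SELECT api.export_logbooks_kml_fn(1, 10);
-- Omitting both arguments exports all logs with a non-NULL track_geom
SELECT api.export_logbooks_kml_fn();
```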
-- Find all log from and to moorage geopoint within 100m
DROP FUNCTION IF EXISTS api.find_log_from_moorage_fn;
@@ -278,6 +505,32 @@ COMMENT ON FUNCTION
api.logs_by_month_fn
IS 'logbook by month for web charts';
-- logs_by_day_fn
DROP FUNCTION IF EXISTS api.logs_by_day_fn;
CREATE FUNCTION api.logs_by_day_fn(OUT charts JSONB) RETURNS JSONB AS $logs_by_day$
DECLARE
data JSONB;
BEGIN
-- Query logs by day
SELECT json_object_agg(day,count) INTO data
FROM (
SELECT
to_char(date_trunc('day', _from_time), 'D') as day,
count(*) as count
FROM api.logbook
GROUP BY day
ORDER BY day
) AS t;
-- Merge jsonb to get all 7 days; to_char with 'D' yields single-digit day-of-week keys 1-7
SELECT '{"1": 0, "2": 0, "3": 0, "4": 0, "5": 0, "6": 0, "7": 0}'::jsonb ||
COALESCE(data, '{}')::jsonb INTO charts;
END;
$logs_by_day$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.logs_by_day_fn
IS 'logbook by day for web charts';
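A usage sketch for the chart endpoint; the result is a single JSONB object with one counter per day of week, as produced by `to_char(_from_time, 'D')`:

```sql
-- Aggregate logbook entries per day of week for the web charts
SELECT api.logs_by_day_fn();
```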
-- moorage_geojson_fn
DROP FUNCTION IF EXISTS api.export_moorages_geojson_fn;
CREATE FUNCTION api.export_moorages_geojson_fn(OUT geojson JSONB) RETURNS JSONB AS $export_moorages_geojson$
@@ -290,7 +543,7 @@ CREATE FUNCTION api.export_moorages_geojson_fn(OUT geojson JSONB) RETURNS JSONB
json_agg(ST_AsGeoJSON(m.*)::JSON) as moorages_geojson
FROM
( SELECT
id,name,stay_code,
EXTRACT(DAY FROM justify_hours ( stay_duration )) AS Total_Stay,
geog
FROM api.moorages
@@ -349,6 +602,7 @@ COMMENT ON FUNCTION
api.export_moorages_gpx_fn
IS 'Export moorages as gpx';
----------------------------------------------------------------------------------------------
-- Statistics
DROP FUNCTION IF EXISTS api.stats_logs_fn;
CREATE OR REPLACE FUNCTION api.stats_logs_fn(
@@ -364,14 +618,22 @@ CREATE OR REPLACE FUNCTION api.stats_logs_fn(
_start_date := start_date::TIMESTAMP WITHOUT TIME ZONE;
_end_date := end_date::TIMESTAMP WITHOUT TIME ZONE;
END IF;
RAISE NOTICE '--> stats_fn, _start_date [%], _end_date [%]', _start_date, _end_date;
WITH
meta AS (
SELECT m.name FROM api.metadata m ),
logs_view AS (
SELECT *
FROM api.logbook l
WHERE _from_time >= _start_date::TIMESTAMP WITHOUT TIME ZONE
AND _to_time <= _end_date::TIMESTAMP WITHOUT TIME ZONE + interval '23 hours 59 minutes'
),
first_date AS (
SELECT _from_time as first_date from logs_view ORDER BY first_date ASC LIMIT 1
),
last_date AS (
SELECT _to_time as last_date from logs_view ORDER BY _to_time DESC LIMIT 1
),
max_speed_id AS (
SELECT id FROM logs_view WHERE max_speed = (SELECT max(max_speed) FROM logs_view) ),
max_wind_speed_id AS (
@@ -386,16 +648,22 @@ CREATE OR REPLACE FUNCTION api.stats_logs_fn(
max(max_speed) AS max_speed,
max(max_wind_speed) AS max_wind_speed,
max(distance) AS max_distance,
sum(distance) AS sum_distance,
max(duration) AS max_duration,
sum(duration) AS sum_duration
FROM logs_view l )
--select * from logbook;
-- Return a JSON
SELECT jsonb_build_object(
'name', meta.name,
'first_date', first_date.first_date,
'last_date', last_date.last_date,
'max_speed_id', max_speed_id.id,
'max_wind_speed_id', max_wind_speed_id.id,
'max_duration_id', max_duration_id.id,
'max_distance_id', max_distance_id.id)::jsonb || to_jsonb(logs_stats.*)::jsonb INTO stats
FROM max_speed_id, max_wind_speed_id, max_distance_id, max_duration_id,
logs_stats, meta, logs_view, first_date, last_date;
-- TODO Add moorages
END;
$stats_logs$ LANGUAGE plpgsql;
@@ -403,3 +671,61 @@ $stats_logs$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.stats_logs_fn
IS 'Logs stats by date';
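The stats function now also surfaces the vessel name and first/last log dates; a usage sketch (the dates are illustrative, in whatever format `public.isdate` accepts):

```sql
-- All-time stats using the default 1970-01-01 .. now() window
SELECT api.stats_logs_fn();
-- Stats restricted to a season
SELECT api.stats_logs_fn('2023-01-01', '2023-10-25');
```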
DROP FUNCTION IF EXISTS api.stats_stays_fn;
CREATE OR REPLACE FUNCTION api.stats_stays_fn(
IN start_date TEXT DEFAULT NULL,
IN end_date TEXT DEFAULT NULL,
OUT stats JSON) RETURNS JSON AS $stats_stays$
DECLARE
_start_date TIMESTAMP WITHOUT TIME ZONE DEFAULT '1970-01-01';
_end_date TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW();
BEGIN
IF start_date IS NOT NULL AND public.isdate(start_date::text) AND public.isdate(end_date::text) THEN
RAISE NOTICE '--> stats_stays_fn, custom filter result stats by date [%]', start_date;
_start_date := start_date::TIMESTAMP WITHOUT TIME ZONE;
_end_date := end_date::TIMESTAMP WITHOUT TIME ZONE;
END IF;
RAISE NOTICE '--> stats_stays_fn, _start_date [%], _end_date [%]', _start_date, _end_date;
WITH
moorages_log AS (
SELECT s.id as stays_id, m.id as moorages_id, *
FROM api.stays s, api.moorages m
WHERE arrived >= _start_date::TIMESTAMP WITHOUT TIME ZONE
AND departed <= _end_date::TIMESTAMP WITHOUT TIME ZONE + interval '23 hours 59 minutes'
AND s.id = m.stay_id
),
home_ports AS (
select count(*) as home_ports from moorages_log m where home_flag is true
),
unique_moorage AS (
select count(*) as unique_moorage from moorages_log m
),
time_at_home_ports AS (
select sum(m.stay_duration) as time_at_home_ports from moorages_log m where home_flag is true
),
sum_stay_duration AS (
select sum(m.stay_duration) as sum_stay_duration from moorages_log m where home_flag is false
),
time_spent_away AS (
select m.stay_code,sum(m.stay_duration) as stay_duration from api.moorages m where home_flag is false group by m.stay_code order by m.stay_code
),
time_spent as (
select jsonb_agg(t.*) as time_spent_away from time_spent_away t
)
-- Return a JSON
SELECT jsonb_build_object(
'home_ports', home_ports.home_ports,
'unique_moorage', unique_moorage.unique_moorage,
'time_at_home_ports', time_at_home_ports.time_at_home_ports,
'sum_stay_duration', sum_stay_duration.sum_stay_duration,
'time_spent_away', time_spent.time_spent_away) INTO stats
FROM moorages_log, home_ports, unique_moorage,
time_at_home_ports, sum_stay_duration, time_spent;
END;
$stats_stays$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.stats_stays_fn
IS 'Stays/Moorages stats by date';
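A usage sketch for the stays/moorages stats, mirroring the logs variant (dates illustrative):

```sql
-- All-time stays/moorages stats
SELECT api.stats_stays_fn();
-- Stats filtered by arrival/departure dates
SELECT api.stats_stays_fn('2023-01-01', '2023-10-25');
```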


@@ -38,13 +38,13 @@ CREATE VIEW stay_in_progress AS
DROP VIEW IF EXISTS api.logs_view;
CREATE OR REPLACE VIEW api.logs_view WITH (security_invoker=true,security_barrier=true) AS
SELECT id,
name as "name",
_from as "from",
_from_time as "started",
_to as "to",
_to_time as "ended",
distance as "distance",
duration as "duration"
FROM api.logbook l
WHERE _to_time IS NOT NULL
ORDER BY _from_time DESC;
@@ -56,13 +56,13 @@ COMMENT ON VIEW
-- Initial try of MATERIALIZED VIEW
CREATE MATERIALIZED VIEW api.logs_mat_view AS
SELECT id,
name as "name",
_from as "from",
_from_time as "started",
_to as "to",
_to_time as "ended",
distance as "distance",
duration as "duration"
FROM api.logbook l
WHERE _to_time IS NOT NULL
ORDER BY _from_time DESC;
@@ -74,14 +74,14 @@ COMMENT ON MATERIALIZED VIEW
DROP VIEW IF EXISTS api.log_view;
CREATE OR REPLACE VIEW api.log_view WITH (security_invoker=true,security_barrier=true) AS
SELECT id,
name as "name",
_from as "from",
_from_time as "started",
_to as "to",
_to_time as "ended",
distance as "distance",
duration as "duration",
notes as "notes",
track_geojson as geojson,
avg_speed as avg_speed,
max_speed as max_speed,
@@ -192,9 +192,9 @@ CREATE OR REPLACE VIEW api.moorages_view WITH (security_invoker=true,security_ba
-- m.stay_duration,
-- justify_hours ( m.stay_duration )
FROM api.moorages m, api.stays_at sa
WHERE m.name IS NOT NULL
AND geog IS NOT NULL
AND m.stay_code = sa.stay_code
GROUP BY m.id,m.name,sa.description,m.stay_duration,m.reference_count,m.geog,sa.stay_code
-- ORDER BY 4 DESC;
ORDER BY m.reference_count DESC;
@@ -207,15 +207,17 @@ DROP VIEW IF EXISTS api.moorage_view;
CREATE OR REPLACE VIEW api.moorage_view WITH (security_invoker=true,security_barrier=true) AS -- TODO
SELECT id,
m.name AS Name,
sa.description AS Default_Stay,
sa.stay_code AS Default_Stay_Id,
m.home_flag AS Home,
EXTRACT(DAY FROM justify_hours ( m.stay_duration )) AS Total_Stay,
m.reference_count AS Arrivals_Departures,
m.notes
-- m.geog
FROM api.moorages m, api.stays_at sa
WHERE m.name IS NOT NULL
AND geog IS NOT NULL
AND m.stay_code = sa.stay_code;
-- Description
COMMENT ON VIEW
api.moorage_view
@@ -255,12 +257,12 @@ CREATE OR REPLACE VIEW api.stats_logs_view WITH (security_invoker=true,security_
SELECT m.time FROM api.metrics m ORDER BY m.time ASC limit 1),
logbook AS (
SELECT
count(*) AS "number_of_log_entries",
max(l.max_speed) AS "max_speed",
max(l.max_wind_speed) AS "max_wind_speed",
sum(l.distance) AS "total_distance",
sum(l.duration) AS "total_time_underway",
concat( max(l.distance), ' NM, ', max(l.duration), ' hours') AS "longest_nonstop_sail"
FROM api.logbook l)
SELECT
m.name as Name,
@@ -299,10 +301,10 @@ CREATE OR REPLACE VIEW api.stats_moorages_view WITH (security_invoker=true,secur
select sum(m.stay_duration) as time_spent_away from api.moorages m where home_flag is false
)
SELECT
home_ports.home_ports as "home_ports",
unique_moorage.unique_moorage as "unique_moorages",
time_at_home_ports.time_at_home_ports "time_spent_at_home_port(s)",
time_spent_away.time_spent_away as "time_spent_away"
FROM home_ports, unique_moorage, time_at_home_ports, time_spent_away;
COMMENT ON VIEW
api.stats_moorages_view
@@ -344,8 +346,8 @@ CREATE VIEW api.monitoring_view WITH (security_invoker=true,security_barrier=tru
metrics-> 'environment.outside.temperature' AS outsideTemperature,
metrics-> 'environment.wind.speedOverGround' AS windSpeedOverGround,
metrics-> 'environment.wind.directionGround' AS windDirectionGround,
metrics-> 'environment.inside.relativeHumidity' AS insideHumidity,
metrics-> 'environment.outside.relativeHumidity' AS outsideHumidity,
metrics-> 'environment.outside.pressure' AS outsidePressure,
metrics-> 'environment.inside.pressure' AS insidePressure,
metrics-> 'electrical.batteries.House.capacity.stateOfCharge' AS batteryCharge,
@@ -370,7 +372,7 @@ CREATE VIEW api.monitoring_humidity WITH (security_invoker=true,security_barrier
SELECT m.time, key, value
FROM api.metrics m,
jsonb_each_text(m.metrics)
WHERE key ILIKE 'environment.%.humidity' OR key ILIKE 'environment.%.relativeHumidity'
ORDER BY m.time DESC;
COMMENT ON VIEW
api.monitoring_humidity


@@ -14,13 +14,13 @@ declare
process_rec record;
begin
-- Check for new logbook pending update
RAISE NOTICE 'cron_process_new_logbook_fn init loop';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_logbook' AND processed IS NULL
ORDER BY stored ASC LIMIT 100
LOOP
RAISE NOTICE 'cron_process_new_logbook_fn processing queue [%] for logbook id [%]', process_rec.id, process_rec.payload;
-- update logbook
PERFORM process_logbook_queue_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
@@ -28,7 +28,7 @@ begin
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE 'cron_process_new_logbook_fn processed queue [%] for logbook id [%]', process_rec.id, process_rec.payload;
END LOOP;
END;
$$ language plpgsql;
@@ -43,13 +43,13 @@ declare
process_rec record;
begin
-- Check for new stay pending update
RAISE NOTICE 'cron_process_new_stay_fn init loop';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_stay' AND processed IS NULL
ORDER BY stored ASC LIMIT 100
LOOP
RAISE NOTICE 'cron_process_new_stay_fn processing queue [%] for stay id [%]', process_rec.id, process_rec.payload;
-- update stay
PERFORM process_stay_queue_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
@@ -57,7 +57,7 @@ begin
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE 'cron_process_new_stay_fn processed queue [%] for stay id [%]', process_rec.id, process_rec.payload;
END LOOP;
END;
$$ language plpgsql;
@@ -73,13 +73,13 @@ declare
process_rec record;
begin
-- Check for new moorage pending update
RAISE NOTICE 'cron_process_new_moorage_fn init loop';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_moorage' AND processed IS NULL
ORDER BY stored ASC LIMIT 100
LOOP
RAISE NOTICE 'cron_process_new_moorage_fn processing queue [%] for moorage id [%]', process_rec.id, process_rec.payload;
-- update moorage
PERFORM process_moorage_queue_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
@@ -87,7 +87,7 @@ begin
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE 'cron_process_new_moorage_fn processed queue [%] for moorage id [%]', process_rec.id, process_rec.payload;
END LOOP;
END;
$$ language plpgsql;
@@ -127,12 +127,12 @@ begin
IF metadata_rec.vessel_id IS NULL OR metadata_rec.vessel_id = '' THEN
RAISE WARNING '-> cron_process_monitor_offline_fn invalid metadata record vessel_id %', vessel_id;
RAISE EXCEPTION 'Invalid metadata'
USING HINT = 'Unknown vessel_id';
RETURN;
END IF;
PERFORM set_config('vessel.id', metadata_rec.vessel_id, false);
RAISE DEBUG '-> DEBUG cron_process_monitor_offline_fn vessel.id %', current_setting('vessel.id', false);
RAISE NOTICE 'cron_process_monitor_offline_fn updated api.metadata table to inactive for [%] [%]', metadata_rec.id, metadata_rec.vessel_id;
-- Gather email and pushover app settings
--app_settings = get_app_settings_fn();
@@ -182,7 +182,7 @@ begin
IF metadata_rec.vessel_id IS NULL OR metadata_rec.vessel_id = '' THEN
RAISE WARNING '-> cron_process_monitor_online_fn invalid metadata record vessel_id %', vessel_id;
RAISE EXCEPTION 'Invalid metadata'
USING HINT = 'Unknown vessel_id';
RETURN;
END IF;
PERFORM set_config('vessel.id', metadata_rec.vessel_id, false);
@@ -348,21 +348,6 @@ COMMENT ON FUNCTION
public.cron_vacuum_fn
IS 'init by pg_cron to full vacuum tables on schema api';
-- CRON for alerts notification
CREATE FUNCTION cron_process_alerts_fn() RETURNS void AS $$
DECLARE
@@ -385,4 +370,177 @@ $$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_alerts_fn
IS 'init by pg_cron to check for alerts';
-- CRON for no vessel notification
CREATE FUNCTION cron_process_no_vessel_fn() RETURNS void AS $no_vessel$
DECLARE
no_vessel record;
user_settings jsonb;
BEGIN
-- Check for users with no vessel registered
RAISE NOTICE 'cron_process_no_vessel_fn';
FOR no_vessel in
SELECT a.user_id,a.email,a.first
FROM auth.accounts a
WHERE NOT EXISTS (
SELECT *
FROM auth.vessels v
WHERE v.owner_email = a.email)
LOOP
RAISE NOTICE '-> cron_process_no_vessel_rec_fn for [%]', no_vessel;
SELECT json_build_object('email', no_vessel.email, 'recipient', no_vessel.first) into user_settings;
RAISE NOTICE '-> debug cron_process_no_vessel_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_vessel'::TEXT, user_settings::JSONB);
END LOOP;
END;
$no_vessel$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_no_vessel_fn
IS 'init by pg_cron, check for users with no vessel registered, then send notification';
-- CRON for no metadata notification
CREATE FUNCTION cron_process_no_metadata_fn() RETURNS void AS $no_metadata$
DECLARE
no_metadata_rec record;
user_settings jsonb;
BEGIN
-- Check for vessels registered but with no metadata
RAISE NOTICE 'cron_process_no_metadata_fn';
FOR no_metadata_rec in
SELECT
a.user_id,a.email,a.first
FROM auth.accounts a, auth.vessels v
WHERE NOT EXISTS (
SELECT *
FROM api.metadata m
WHERE v.vessel_id = m.vessel_id) AND v.owner_email = a.email
LOOP
RAISE NOTICE '-> cron_process_no_metadata_rec_fn for [%]', no_metadata_rec;
SELECT json_build_object('email', no_metadata_rec.email, 'recipient', no_metadata_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_no_metadata_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_metadata'::TEXT, user_settings::JSONB);
END LOOP;
END;
$no_metadata$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_no_metadata_fn
IS 'init by pg_cron, check for vessel with no metadata then send notification';
-- CRON for no activity notification
CREATE FUNCTION cron_process_no_activity_fn() RETURNS void AS $no_activity$
DECLARE
no_activity_rec record;
user_settings jsonb;
BEGIN
-- Check for vessel with no activity for more than 200 days
RAISE NOTICE 'cron_process_no_activity_fn';
FOR no_activity_rec in
SELECT
v.owner_email,m.name,m.vessel_id,m.time,a.first
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '200 DAYS'
LOOP
RAISE NOTICE '-> cron_process_no_activity_rec_fn for [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.owner_email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_no_activity_rec_fn [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('no_activity'::TEXT, user_settings::JSONB);
END LOOP;
END;
$no_activity$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_no_activity_fn
IS 'init by pg_cron, check for vessel with no activity for more than 200 days then send notification';
-- CRON for deactivated/deletion
CREATE FUNCTION cron_process_deactivated_fn() RETURNS void AS $deactivated$
DECLARE
no_activity_rec record;
user_settings jsonb;
BEGIN
RAISE NOTICE 'cron_process_deactivated_fn';
-- List accounts with vessel inactivity for more than 1 YEAR
FOR no_activity_rec in
SELECT
v.owner_email,m.name,m.vessel_id,m.time,a.first
FROM auth.accounts a
LEFT JOIN auth.vessels v ON v.owner_email = a.email
LEFT JOIN api.metadata m ON v.vessel_id = m.vessel_id
WHERE m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '1 YEAR'
LOOP
RAISE NOTICE '-> cron_process_deactivated_rec_fn for inactivity [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.owner_email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_deactivated_rec_fn inactivity [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
--PERFORM public.delete_account_fn(no_activity_rec.owner_email::TEXT, no_activity_rec.vessel_id::TEXT);
END LOOP;
-- List accounts with no vessel metadata for more than 1 YEAR
FOR no_activity_rec in
SELECT
a.user_id,a.email,a.first,a.created_at
FROM auth.accounts a, auth.vessels v
WHERE NOT EXISTS (
SELECT *
FROM api.metadata m
WHERE v.vessel_id = m.vessel_id) AND v.owner_email = a.email
AND v.created_at < NOW() AT TIME ZONE 'UTC' - INTERVAL '1 YEAR'
LOOP
RAISE NOTICE '-> cron_process_deactivated_rec_fn for no metadata [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_deactivated_rec_fn no metadata [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
--PERFORM public.delete_account_fn(no_activity_rec.owner_email::TEXT, no_activity_rec.vessel_id::TEXT);
END LOOP;
-- List accounts with no vessel created for more than 1 YEAR
FOR no_activity_rec in
SELECT a.user_id,a.email,a.first,a.created_at
FROM auth.accounts a
WHERE NOT EXISTS (
SELECT *
FROM auth.vessels v
WHERE v.owner_email = a.email)
AND a.created_at < NOW() AT TIME ZONE 'UTC' - INTERVAL '1 YEAR'
LOOP
RAISE NOTICE '-> cron_process_deactivated_rec_fn for no vessel [%]', no_activity_rec;
SELECT json_build_object('email', no_activity_rec.email, 'recipient', no_activity_rec.first) into user_settings;
RAISE NOTICE '-> debug cron_process_deactivated_rec_fn no vessel [%]', user_settings;
-- Send notification
PERFORM send_notification_fn('deactivated'::TEXT, user_settings::JSONB);
--PERFORM public.delete_account_fn(no_activity_rec.owner_email::TEXT, no_activity_rec.vessel_id::TEXT);
END LOOP;
END;
$deactivated$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_deactivated_fn
IS 'init by pg_cron, check for vessel with no activity for more than 1 year then send notification and delete data';
-- Need to be in the postgres database.
\c postgres
-- CRON for clean up job details logs
CREATE FUNCTION job_run_details_cleanup_fn() RETURNS void AS $$
DECLARE
BEGIN
-- Remove job run log older than 3 months
RAISE NOTICE 'job_run_details_cleanup_fn';
DELETE FROM cron.job_run_details
WHERE start_time <= NOW() AT TIME ZONE 'UTC' - INTERVAL '91 DAYS';
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.job_run_details_cleanup_fn
IS 'init by pg_cron to cleanup job_run_details table on schema public postgres db';
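Since the function must live in the postgres database next to the pg_cron metadata, scheduling it would look something like the following (the job name and cadence are illustrative, not taken from this repository):

```sql
-- Run the cleanup weekly, Sunday at 03:00 UTC; pg_cron records the job in cron.job
SELECT cron.schedule('job_run_details_cleanup', '0 3 * * 0',
    'SELECT public.job_run_details_cleanup_fn()');
```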


@@ -52,7 +52,7 @@ COMMENT ON TABLE
-- with escape value, eg: E'A\nB\r\nC'
-- https://stackoverflow.com/questions/26638615/insert-line-break-in-postgresql-when-updating-text-field
-- TODO Update notification subject for log entry to 'logbook #NB ...'
INSERT INTO public.email_templates VALUES
('logbook',
'New Logbook Entry',
E'Hello __RECIPIENT__,\n\nWe just wanted to let you know that you have a new entry on openplotter.cloud: "__LOGBOOK_NAME__"\r\n\r\nSee more details at __APP_URL__/log/__LOGBOOK_LINK__\n\nHappy sailing!\nThe PostgSail Team',
@@ -64,19 +64,19 @@ INSERT INTO email_templates VALUES
'Welcome',
E'Hi!\nYou successfully created an account\nKeep in mind to register your vessel.\n'),
('new_vessel',
'New boat',
E'Hi!\nHow are you?\n__BOAT__ is now linked to your account.\n',
'New boat',
E'Hi!\nHow are you?\n__BOAT__ is now linked to your account.\n'),
('monitor_offline',
'Boat went Offline',
E'__BOAT__ has been offline for more than an hour\r\nFind more details at __APP_URL__/boats\n',
'Boat went Offline',
E'__BOAT__ has been offline for more than an hour\r\nFind more details at __APP_URL__/boats\n'),
('monitor_online',
'Boat went Online',
E'__BOAT__ just came online\nFind more details at __APP_URL__/boats\n',
'Boat went Online',
E'__BOAT__ just came online\nFind more details at __APP_URL__/boats\n'),
('new_badge',
'New Badge!',
@@ -112,7 +112,27 @@ INSERT INTO email_templates VALUES
'Telegram bot',
E'Hello __RECIPIENT__,\nCongratulations! You have just connected your account to your vessel, @postgsail_bot.\n\nThe PostgSail Team',
'Telegram bot!',
E'Congratulations!\nYou have just connected your account to your vessel, @postgsail_bot.\n'),
('no_vessel',
'PostgSail add your boat',
E'Hello __RECIPIENT__,\nYou have created an account on PostgSail but you have not created your boat yet.\nIf you need any assistance I would be happy to help. It is free and an open-source.\nThe PostgSail Team',
'PostgSail next step',
E'Hello,\nYou should create your vessel. Check your email!\n'),
('no_metadata',
'PostgSail connect your boat',
E'Hello __RECIPIENT__,\nYou have created an account on PostgSail but you have not connected your boat yet.\nIf you need any assistance I would be happy to help. It is free and an open-source.\nThe PostgSail Team',
'PostgSail next step',
E'Hello,\nYou should connect your vessel. Check your email!\n'),
('no_activity',
'PostgSail boat inactivity',
E'Hello __RECIPIENT__,\nWe don\'t see any activity on your account, do you need any assistance?\nIf you need any assistance I would be happy to help. It is free and an open-source.\nThe PostgSail Team',
'PostgSail inactivity!',
E'We detected inactivity. Check your email!\n'),
('deactivated',
'PostgSail account deactivated',
E'Hello __RECIPIENT__,\nYour account has been deactivated and all your data has been removed from PostgSail system.',
'PostgSail deactivated!',
E'We removed your account. Check your email!\n');
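The templates above rely on `__RECIPIENT__`, `__BOAT__`, `__LOGBOOK_NAME__`, `__LOGBOOK_LINK__` and `__APP_URL__` tokens that the notification pipeline fills in at send time. A minimal sketch of that substitution (the helper name is hypothetical; the real replacement happens inside the PostgSail notification functions):

```python
# Hypothetical helper illustrating how the __TOKEN__ placeholders in the
# email_templates rows could be filled in; PostgSail performs the actual
# substitution in its notification functions.
def render_template(template: str, values: dict) -> str:
    out = template
    for key, val in values.items():
        out = out.replace(f"__{key}__", str(val))
    return out

body = 'Hello __RECIPIENT__,\nNew entry: "__LOGBOOK_NAME__"\nSee __APP_URL__/log/__LOGBOOK_LINK__\n'
print(render_template(body, {
    "RECIPIENT": "Alice",
    "LOGBOOK_NAME": "Harbor to Bay",
    "APP_URL": "https://openplotter.cloud",
    "LOGBOOK_LINK": 42,
}))
```

Any token without a matching key is left untouched, which matches how an unset placeholder would simply survive in the sent message.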
--------------------------------------------------------------------------- ---------------------------------------------------------------------------
-- Queue handling -- Queue handling
@@ -15,7 +15,7 @@ CREATE SCHEMA IF NOT EXISTS public;
-- process single cron event, process_[logbook|stay|moorage]_queue_fn() -- process single cron event, process_[logbook|stay|moorage]_queue_fn()
-- --
CREATE OR REPLACE FUNCTION logbook_metrics_dwithin_fn( CREATE OR REPLACE FUNCTION public.logbook_metrics_dwithin_fn(
IN _start text, IN _start text,
IN _end text, IN _end text,
IN lgn float, IN lgn float,
@@ -33,18 +33,18 @@ CREATE OR REPLACE FUNCTION logbook_metrics_dwithin_fn(
AND ST_DWithin( AND ST_DWithin(
Geography(ST_MakePoint(m.longitude, m.latitude)), Geography(ST_MakePoint(m.longitude, m.latitude)),
Geography(ST_MakePoint(lgn, lat)), Geography(ST_MakePoint(lgn, lat)),
10 15
); );
END; END;
$logbook_metrics_dwithin$ LANGUAGE plpgsql; $logbook_metrics_dwithin$ LANGUAGE plpgsql;
-- Description -- Description
COMMENT ON FUNCTION COMMENT ON FUNCTION
public.logbook_metrics_dwithin_fn public.logbook_metrics_dwithin_fn
IS 'Check if all entries for a logbook are in stationary movement with 10 meters'; IS 'Check if all entries for a logbook are in stationary movement with 15 meters';
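The function above counts metrics whose position lies within 15 metres (raised from 10) of a reference point, using `ST_DWithin` on geography. The same distance test can be sketched outside the database with a haversine approximation (illustrative only; PostGIS geography uses a more precise spheroidal computation):

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    # Great-circle distance in metres between two lon/lat points.
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_stationary(points, origin, threshold_m=15.0):
    # Mirrors the ST_DWithin check: every metric point within
    # threshold_m of the reference position.
    return all(haversine_m(lon, lat, *origin) <= threshold_m for lon, lat in points)
```

At the equator, 0.0001 degrees of longitude is roughly 11 metres, so such a point still counts as stationary under the widened 15 metre zone.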
-- Update a logbook with avg data -- Update a logbook with avg data
-- TODO using timescale function -- TODO using timescale function
CREATE OR REPLACE FUNCTION logbook_update_avg_fn( CREATE OR REPLACE FUNCTION public.logbook_update_avg_fn(
IN _id integer, IN _id integer,
IN _start TEXT, IN _start TEXT,
IN _end TEXT, IN _end TEXT,
@@ -54,7 +54,7 @@ CREATE OR REPLACE FUNCTION logbook_update_avg_fn(
OUT count_metric integer OUT count_metric integer
) AS $logbook_update_avg$ ) AS $logbook_update_avg$
BEGIN BEGIN
RAISE NOTICE '-> Updating avg for logbook id=%, start:"%", end:"%"', _id, _start, _end; RAISE NOTICE '-> logbook_update_avg_fn calculate avg for logbook id=%, start:"%", end:"%"', _id, _start, _end;
SELECT AVG(speedoverground), MAX(speedoverground), MAX(windspeedapparent), COUNT(*) INTO SELECT AVG(speedoverground), MAX(speedoverground), MAX(windspeedapparent), COUNT(*) INTO
avg_speed, max_speed, max_wind_speed, count_metric avg_speed, max_speed, max_wind_speed, count_metric
FROM api.metrics m FROM api.metrics m
@@ -63,7 +63,7 @@ CREATE OR REPLACE FUNCTION logbook_update_avg_fn(
AND m.time >= _start::TIMESTAMP WITHOUT TIME ZONE AND m.time >= _start::TIMESTAMP WITHOUT TIME ZONE
AND m.time <= _end::TIMESTAMP WITHOUT TIME ZONE AND m.time <= _end::TIMESTAMP WITHOUT TIME ZONE
AND vessel_id = current_setting('vessel.id', false); AND vessel_id = current_setting('vessel.id', false);
RAISE NOTICE '-> Updated avg for logbook id=%, avg_speed:%, max_speed:%, max_wind_speed:%, count:%', _id, avg_speed, max_speed, max_wind_speed, count_metric; RAISE NOTICE '-> logbook_update_avg_fn avg for logbook id=%, avg_speed:%, max_speed:%, max_wind_speed:%, count:%', _id, avg_speed, max_speed, max_wind_speed, count_metric;
END; END;
$logbook_update_avg$ LANGUAGE plpgsql; $logbook_update_avg$ LANGUAGE plpgsql;
-- Description -- Description
@@ -74,8 +74,8 @@ COMMENT ON FUNCTION
-- Create a LINESTRING for Geometry -- Create a LINESTRING for Geometry
-- Todo validate st_length unit? -- Todo validate st_length unit?
-- https://postgis.net/docs/ST_Length.html -- https://postgis.net/docs/ST_Length.html
DROP FUNCTION IF EXISTS logbook_update_geom_distance_fn; DROP FUNCTION IF EXISTS public.logbook_update_geom_distance_fn;
CREATE FUNCTION logbook_update_geom_distance_fn(IN _id integer, IN _start text, IN _end text, CREATE FUNCTION public.logbook_update_geom_distance_fn(IN _id integer, IN _start text, IN _end text,
OUT _track_geom Geometry(LINESTRING), OUT _track_geom Geometry(LINESTRING),
OUT _track_distance double precision OUT _track_distance double precision
) AS $logbook_geo_distance$ ) AS $logbook_geo_distance$
@@ -109,7 +109,7 @@ COMMENT ON FUNCTION
IS 'Update logbook details with geometry data an distance, ST_Length in Nautical Mile (international)'; IS 'Update logbook details with geometry data an distance, ST_Length in Nautical Mile (international)';
-- Create GeoJSON for api consume. -- Create GeoJSON for api consume.
CREATE FUNCTION logbook_update_geojson_fn(IN _id integer, IN _start text, IN _end text, CREATE FUNCTION public.logbook_update_geojson_fn(IN _id integer, IN _start text, IN _end text,
OUT _track_geojson JSON OUT _track_geojson JSON
) AS $logbook_geojson$ ) AS $logbook_geojson$
declare declare
@@ -121,12 +121,12 @@ CREATE FUNCTION logbook_update_geojson_fn(IN _id integer, IN _start text, IN _en
SELECT SELECT
ST_AsGeoJSON(log.*) into log_geojson ST_AsGeoJSON(log.*) into log_geojson
FROM FROM
( select ( SELECT
id,name, id,name,
distance, distance,
duration, duration,
avg_speed, avg_speed,
avg_speed, max_speed,
max_wind_speed, max_wind_speed,
_from_time, _from_time,
notes, notes,
@@ -138,7 +138,7 @@ CREATE FUNCTION logbook_update_geojson_fn(IN _id integer, IN _start text, IN _en
SELECT SELECT
json_agg(ST_AsGeoJSON(t.*)::json) into metrics_geojson json_agg(ST_AsGeoJSON(t.*)::json) into metrics_geojson
FROM ( FROM (
( select ( SELECT
time, time,
courseovergroundtrue, courseovergroundtrue,
speedoverground, speedoverground,
@@ -156,7 +156,7 @@ CREATE FUNCTION logbook_update_geojson_fn(IN _id integer, IN _start text, IN _en
) AS t; ) AS t;
-- Merge jsonb -- Merge jsonb
select log_geojson::jsonb || metrics_geojson::jsonb into _map; SELECT log_geojson::jsonb || metrics_geojson::jsonb into _map;
-- output -- output
SELECT SELECT
json_build_object( json_build_object(
@@ -195,7 +195,7 @@ AS $logbook_update_gpx$
RAISE WARNING '-> logbook_update_gpx_fn invalid logbook %', _id; RAISE WARNING '-> logbook_update_gpx_fn invalid logbook %', _id;
RETURN; RETURN;
END IF; END IF;
-- Gathe url from app settings -- Gather url from app settings
app_settings := get_app_settings_fn(); app_settings := get_app_settings_fn();
--RAISE DEBUG '-> logbook_update_gpx_fn app_settings %', app_settings; --RAISE DEBUG '-> logbook_update_gpx_fn app_settings %', app_settings;
-- Generate XML -- Generate XML
@@ -213,7 +213,7 @@ AS $logbook_update_gpx$
xmlelement(name desc, log_rec.notes), xmlelement(name desc, log_rec.notes),
xmlelement(name link, xmlattributes(concat(app_settings->>'app.url', '/log/', log_rec.id) as href), xmlelement(name link, xmlattributes(concat(app_settings->>'app.url', '/log/', log_rec.id) as href),
xmlelement(name text, log_rec.name)), xmlelement(name text, log_rec.name)),
xmlelement(name extensions, xmlelement(name "postgsail:log_id", 1), xmlelement(name extensions, xmlelement(name "postgsail:log_id", log_rec.id),
xmlelement(name "postgsail:link", concat(app_settings->>'app.url','/log/', log_rec.id)), xmlelement(name "postgsail:link", concat(app_settings->>'app.url','/log/', log_rec.id)),
xmlelement(name "opencpn:guid", uuid_generate_v4()), xmlelement(name "opencpn:guid", uuid_generate_v4()),
xmlelement(name "opencpn:viz", '1'), xmlelement(name "opencpn:viz", '1'),
@@ -230,9 +230,9 @@ AS $logbook_update_gpx$
AND m.longitude IS NOT NULL AND m.longitude IS NOT NULL
AND m.time >= log_rec._from_time::TIMESTAMP WITHOUT TIME ZONE AND m.time >= log_rec._from_time::TIMESTAMP WITHOUT TIME ZONE
AND m.time <= log_rec._to_time::TIMESTAMP WITHOUT TIME ZONE AND m.time <= log_rec._to_time::TIMESTAMP WITHOUT TIME ZONE
AND vessel_id = log_rec.vessel_id; AND vessel_id = log_rec.vessel_id
-- ERROR: column "m.time" must appear in the GROUP BY clause or be used in an aggregate function at character 2304 GROUP BY m.time
--ORDER BY m.time ASC; ORDER BY m.time ASC;
END; END;
$logbook_update_gpx$ LANGUAGE plpgsql; $logbook_update_gpx$ LANGUAGE plpgsql;
-- Description -- Description
@@ -257,8 +257,8 @@ AS $logbook_get_extra_json$
AND vessel_id = current_setting('vessel.id', false) AND vessel_id = current_setting('vessel.id', false)
LOOP LOOP
-- Engine Hours in seconds -- Engine Hours in seconds
raise notice '-> logbook_get_extra_json_fn metric: %', metric_rec; RAISE NOTICE '-> logbook_get_extra_json_fn metric: %', metric_rec;
with WITH
end_metric AS ( end_metric AS (
-- Fetch 'tanks.%.currentVolume' last entry -- Fetch 'tanks.%.currentVolume' last entry
SELECT key, value SELECT key, value
@@ -274,7 +274,7 @@ AS $logbook_get_extra_json$
) )
-- Generate JSON -- Generate JSON
SELECT jsonb_build_object(metric_rec.key, metric.value) INTO metric_json FROM metrics; SELECT jsonb_build_object(metric_rec.key, metric.value) INTO metric_json FROM metrics;
raise notice '-> logbook_get_extra_json_fn key: %, value: %', metric_rec.key, metric_json; RAISE NOTICE '-> logbook_get_extra_json_fn key: %, value: %', metric_rec.key, metric_json;
END LOOP; END LOOP;
END; END;
$logbook_get_extra_json$ LANGUAGE plpgsql; $logbook_get_extra_json$ LANGUAGE plpgsql;
@@ -319,7 +319,7 @@ CREATE FUNCTION logbook_update_extra_json_fn(IN _id integer, IN _start text, IN
) )
-- Generate JSON -- Generate JSON
SELECT jsonb_build_object('navigation.log', trip) INTO log_json FROM nm; SELECT jsonb_build_object('navigation.log', trip) INTO log_json FROM nm;
raise notice '-> logbook_update_extra_json_fn navigation.log: %', log_json; RAISE NOTICE '-> logbook_update_extra_json_fn navigation.log: %', log_json;
-- Calculate engine hours from propulsion.%.runTime first entry -- Calculate engine hours from propulsion.%.runTime first entry
FOR metric_rec IN FOR metric_rec IN
@@ -331,7 +331,7 @@ CREATE FUNCTION logbook_update_extra_json_fn(IN _id integer, IN _start text, IN
AND vessel_id = current_setting('vessel.id', false) AND vessel_id = current_setting('vessel.id', false)
LOOP LOOP
-- Engine Hours in seconds -- Engine Hours in seconds
raise notice '-> logbook_update_extra_json_fn propulsion.*.runTime: %', metric_rec; RAISE NOTICE '-> logbook_update_extra_json_fn propulsion.*.runTime: %', metric_rec;
with with
end_runtime AS ( end_runtime AS (
-- Fetch 'propulsion.*.runTime' last entry -- Fetch 'propulsion.*.runTime' last entry
@@ -348,13 +348,13 @@ CREATE FUNCTION logbook_update_extra_json_fn(IN _id integer, IN _start text, IN
) )
-- Generate JSON -- Generate JSON
SELECT jsonb_build_object(metric_rec.key, runtime.value) INTO runtime_json FROM runtime; SELECT jsonb_build_object(metric_rec.key, runtime.value) INTO runtime_json FROM runtime;
raise notice '-> logbook_update_extra_json_fn key: %, value: %', metric_rec.key, runtime_json; RAISE NOTICE '-> logbook_update_extra_json_fn key: %, value: %', metric_rec.key, runtime_json;
END LOOP; END LOOP;
-- Update logbook with extra value and return json -- Update logbook with extra value and return json
SELECT COALESCE(log_json::JSONB, '{}'::jsonb) || COALESCE(runtime_json::JSONB, '{}'::jsonb) INTO metrics_json; SELECT COALESCE(log_json::JSONB, '{}'::jsonb) || COALESCE(runtime_json::JSONB, '{}'::jsonb) INTO metrics_json;
SELECT jsonb_build_object('metrics', metrics_json, 'observations', obs_json) INTO _extra_json; SELECT jsonb_build_object('metrics', metrics_json, 'observations', obs_json) INTO _extra_json;
raise notice '-> logbook_update_extra_json_fn log_json: %, runtime_json: %, _extra_json: %', log_json, runtime_json, _extra_json; RAISE NOTICE '-> logbook_update_extra_json_fn log_json: %, runtime_json: %, _extra_json: %', log_json, runtime_json, _extra_json;
END; END;
$logbook_extra_json$ LANGUAGE plpgsql; $logbook_extra_json$ LANGUAGE plpgsql;
-- Description -- Description
@@ -385,6 +385,7 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
current_stays_id numeric; current_stays_id numeric;
current_stays_active boolean; current_stays_active boolean;
extra_json jsonb; extra_json jsonb;
geo jsonb;
BEGIN BEGIN
-- If _id is not NULL -- If _id is not NULL
IF _id IS NULL OR _id < 1 THEN IF _id IS NULL OR _id < 1 THEN
@@ -475,8 +476,10 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
-- Generate logbook name, concat _from_location and _to_location -- Generate logbook name, concat _from_location and _to_location
-- geo reverse _from_lng _from_lat -- geo reverse _from_lng _from_lat
-- geo reverse _to_lng _to_lat -- geo reverse _to_lng _to_lat
from_name := reverse_geocode_py_fn('nominatim', logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC); geo := reverse_geocode_py_fn('nominatim', logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
to_name := reverse_geocode_py_fn('nominatim', logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC); from_name := geo->>'name';
geo := reverse_geocode_py_fn('nominatim', logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC);
to_name := geo->>'name';
SELECT CONCAT(from_name, ' to ' , to_name) INTO log_name; SELECT CONCAT(from_name, ' to ' , to_name) INTO log_name;
-- Process `propulsion.*.runTime` and `navigation.log` -- Process `propulsion.*.runTime` and `navigation.log`
@@ -506,22 +509,22 @@ CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void
WHERE id = logbook_rec.id; WHERE id = logbook_rec.id;
-- GPX field -- GPX field
gpx := logbook_update_gpx_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT); --gpx := logbook_update_gpx_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
UPDATE api.logbook --UPDATE api.logbook
SET -- SET
track_gpx = gpx -- track_gpx = gpx
WHERE id = logbook_rec.id; -- WHERE id = logbook_rec.id;
-- Prepare notification, gather user settings -- Prepare notification, gather user settings
SELECT json_build_object('logbook_name', log_name, 'logbook_link', logbook_rec.id) into log_settings; SELECT json_build_object('logbook_name', log_name, 'logbook_link', logbook_rec.id) into log_settings;
user_settings := get_user_settings_from_vesselid_fn(logbook_rec.vessel_id::TEXT); user_settings := get_user_settings_from_vesselid_fn(logbook_rec.vessel_id::TEXT);
SELECT user_settings::JSONB || log_settings::JSONB into user_settings; SELECT user_settings::JSONB || log_settings::JSONB into user_settings;
RAISE DEBUG '-> debug process_logbook_queue_fn get_user_settings_from_vesselid_fn [%]', user_settings; RAISE NOTICE '-> debug process_logbook_queue_fn get_user_settings_from_vesselid_fn [%]', user_settings;
RAISE DEBUG '-> debug process_logbook_queue_fn log_settings [%]', log_settings; RAISE NOTICE '-> debug process_logbook_queue_fn log_settings [%]', log_settings;
-- Send notification -- Send notification
PERFORM send_notification_fn('logbook'::TEXT, user_settings::JSONB); PERFORM send_notification_fn('logbook'::TEXT, user_settings::JSONB);
-- Process badges -- Process badges
RAISE DEBUG '-> debug process_logbook_queue_fn user_settings [%]', user_settings->>'email'::TEXT; RAISE NOTICE '-> debug process_logbook_queue_fn user_settings [%]', user_settings->>'email'::TEXT;
PERFORM set_config('user.email', user_settings->>'email'::TEXT, false); PERFORM set_config('user.email', user_settings->>'email'::TEXT, false);
PERFORM badges_logbook_fn(logbook_rec.id); PERFORM badges_logbook_fn(logbook_rec.id);
PERFORM badges_geom_fn(logbook_rec.id); PERFORM badges_geom_fn(logbook_rec.id);
@@ -537,7 +540,7 @@ DROP FUNCTION IF EXISTS process_stay_queue_fn;
CREATE OR REPLACE FUNCTION process_stay_queue_fn(IN _id integer) RETURNS void AS $process_stay_queue$ CREATE OR REPLACE FUNCTION process_stay_queue_fn(IN _id integer) RETURNS void AS $process_stay_queue$
DECLARE DECLARE
stay_rec record; stay_rec record;
_name varchar; geo jsonb;
BEGIN BEGIN
RAISE NOTICE 'process_stay_queue_fn'; RAISE NOTICE 'process_stay_queue_fn';
-- If _id is valid, not NULL -- If _id is valid, not NULL
@@ -559,12 +562,12 @@ CREATE OR REPLACE FUNCTION process_stay_queue_fn(IN _id integer) RETURNS void AS
PERFORM set_config('vessel.id', stay_rec.vessel_id, false); PERFORM set_config('vessel.id', stay_rec.vessel_id, false);
-- geo reverse _lng _lat -- geo reverse _lng _lat
_name := reverse_geocode_py_fn('nominatim', stay_rec.longitude::NUMERIC, stay_rec.latitude::NUMERIC); geo := reverse_geocode_py_fn('nominatim', stay_rec.longitude::NUMERIC, stay_rec.latitude::NUMERIC);
RAISE NOTICE 'Updating stay entry [%]', stay_rec.id; RAISE NOTICE 'Updating stay entry [%]', stay_rec.id;
UPDATE api.stays UPDATE api.stays
SET SET
name = _name, name = coalesce(geo->>'name', null),
geog = Geography(ST_MakePoint(stay_rec.longitude, stay_rec.latitude)) geog = Geography(ST_MakePoint(stay_rec.longitude, stay_rec.latitude))
WHERE id = stay_rec.id; WHERE id = stay_rec.id;
@@ -585,6 +588,7 @@ CREATE OR REPLACE FUNCTION process_moorage_queue_fn(IN _id integer) RETURNS void
stay_rec record; stay_rec record;
moorage_rec record; moorage_rec record;
user_settings jsonb; user_settings jsonb;
geo jsonb;
BEGIN BEGIN
RAISE NOTICE 'process_moorage_queue_fn'; RAISE NOTICE 'process_moorage_queue_fn';
-- If _id is not NULL -- If _id is not NULL
@@ -647,16 +651,19 @@ CREATE OR REPLACE FUNCTION process_moorage_queue_fn(IN _id integer) RETURNS void
WHERE id = moorage_rec.id; WHERE id = moorage_rec.id;
ELSE ELSE
RAISE NOTICE 'Insert new moorage entry from stay %', stay_rec; RAISE NOTICE 'Insert new moorage entry from stay %', stay_rec;
-- Ensure the stay as a name if lat,lon -- Set the moorage name and country if lat,lon
IF stay_rec.name IS NULL AND stay_rec.longitude IS NOT NULL AND stay_rec.latitude IS NOT NULL THEN IF stay_rec.longitude IS NOT NULL AND stay_rec.latitude IS NOT NULL THEN
stay_rec.name := reverse_geocode_py_fn('nominatim', stay_rec.longitude::NUMERIC, stay_rec.latitude::NUMERIC); geo := reverse_geocode_py_fn('nominatim', stay_rec.longitude::NUMERIC, stay_rec.latitude::NUMERIC);
moorage_rec.name = geo->>'name';
moorage_rec.country = geo->>'country_code';
END IF; END IF;
-- Insert new moorage from stay -- Insert new moorage from stay
INSERT INTO api.moorages INSERT INTO api.moorages
(vessel_id, name, stay_id, stay_code, stay_duration, reference_count, latitude, longitude, geog) (vessel_id, name, country, stay_id, stay_code, stay_duration, reference_count, latitude, longitude, geog)
VALUES ( VALUES (
stay_rec.vessel_id, stay_rec.vessel_id,
stay_rec.name, coalesce(moorage_rec.name, null),
coalesce(moorage_rec.country, null),
stay_rec.id, stay_rec.id,
stay_rec.stay_code, stay_rec.stay_code,
(stay_rec.departed::timestamp without time zone - stay_rec.arrived::timestamp without time zone), (stay_rec.departed::timestamp without time zone - stay_rec.arrived::timestamp without time zone),
@@ -846,7 +853,7 @@ COMMENT ON FUNCTION
public.process_vessel_queue_fn public.process_vessel_queue_fn
IS 'process new vessel notification'; IS 'process new vessel notification';
-- Get user settings details from a log entry -- Get application settings details from a log entry
DROP FUNCTION IF EXISTS get_app_settings_fn; DROP FUNCTION IF EXISTS get_app_settings_fn;
CREATE OR REPLACE FUNCTION get_app_settings_fn(OUT app_settings jsonb) CREATE OR REPLACE FUNCTION get_app_settings_fn(OUT app_settings jsonb)
RETURNS jsonb RETURNS jsonb
@@ -858,17 +865,37 @@ BEGIN
FROM FROM
public.app_settings public.app_settings
WHERE WHERE
name LIKE '%app.email%' name LIKE 'app.email%'
OR name LIKE '%app.pushover%' OR name LIKE 'app.pushover%'
OR name LIKE '%app.url' OR name LIKE 'app.url'
OR name LIKE '%app.telegram%'; OR name LIKE 'app.telegram%';
END; END;
$get_app_settings$ $get_app_settings$
LANGUAGE plpgsql; LANGUAGE plpgsql;
-- Description -- Description
COMMENT ON FUNCTION COMMENT ON FUNCTION
public.get_app_settings_fn public.get_app_settings_fn
IS 'get app settings details, email, pushover, telegram'; IS 'get application settings details, email, pushover, telegram';
DROP FUNCTION IF EXISTS get_app_url_fn;
CREATE OR REPLACE FUNCTION get_app_url_fn(OUT app_settings jsonb)
RETURNS jsonb
AS $get_app_url$
DECLARE
BEGIN
SELECT
jsonb_object_agg(name, value) INTO app_settings
FROM
public.app_settings
WHERE
name = 'app.url';
END;
$get_app_url$
LANGUAGE plpgsql security definer;
-- Description
COMMENT ON FUNCTION
public.get_app_url_fn
IS 'get application url security definer';
-- Send notifications -- Send notifications
DROP FUNCTION IF EXISTS send_notification_fn; DROP FUNCTION IF EXISTS send_notification_fn;
@@ -961,7 +988,7 @@ AS $get_user_settings_from_vesselid$
FROM auth.accounts a, auth.vessels v, api.metadata m FROM auth.accounts a, auth.vessels v, api.metadata m
WHERE m.vessel_id = v.vessel_id WHERE m.vessel_id = v.vessel_id
AND m.vessel_id = vesselid AND m.vessel_id = vesselid
AND lower(a.email) = lower(v.owner_email); AND a.email = v.owner_email;
PERFORM set_config('user.email', user_settings->>'email'::TEXT, false); PERFORM set_config('user.email', user_settings->>'email'::TEXT, false);
PERFORM set_config('user.recipient', user_settings->>'recipient'::TEXT, false); PERFORM set_config('user.recipient', user_settings->>'recipient'::TEXT, false);
END; END;
@@ -1232,7 +1259,7 @@ CREATE OR REPLACE FUNCTION public.badges_geom_fn(IN logbook_id integer) RETURNS
user_settings jsonb; user_settings jsonb;
badge_tmp text; badge_tmp text;
begin begin
RAISE WARNING '--> user.email [%], vessel.id [%]', current_setting('user.email', false), current_setting('vessel.id', false); --RAISE NOTICE '--> public.badges_geom_fn user.email [%], vessel.id [%]', current_setting('user.email', false), current_setting('vessel.id', false);
-- Tropical & Alaska zone manually add into ne_10m_geography_marine_polys -- Tropical & Alaska zone manually add into ne_10m_geography_marine_polys
-- Check if each geographic marine zone exist as a badge -- Check if each geographic marine zone exist as a badge
FOR marine_rec IN FOR marine_rec IN
@@ -1312,7 +1339,7 @@ BEGIN
WHERE auth.accounts.email = _email; WHERE auth.accounts.email = _email;
IF account_rec.email IS NULL THEN IF account_rec.email IS NULL THEN
RAISE EXCEPTION 'Invalid user' RAISE EXCEPTION 'Invalid user'
USING HINT = 'Unknow user or password'; USING HINT = 'Unknown user or password';
END IF; END IF;
-- Set session variables -- Set session variables
PERFORM set_config('user.id', account_rec.user_id, false); PERFORM set_config('user.id', account_rec.user_id, false);
@@ -1390,7 +1417,7 @@ BEGIN
perform public.cron_process_new_moorage_fn(); perform public.cron_process_new_moorage_fn();
perform public.cron_process_monitor_offline_fn(); perform public.cron_process_monitor_offline_fn();
END END
$$ language plpgsql security definer; $$ language plpgsql;
--------------------------------------------------------------------------- ---------------------------------------------------------------------------
-- Delete all data for a account by email and vessel_id -- Delete all data for a account by email and vessel_id
@@ -1410,4 +1437,34 @@ BEGIN
delete from auth.accounts a where email = _email; delete from auth.accounts a where email = _email;
RETURN True; RETURN True;
END END
$delete_account$ language plpgsql security definer; $delete_account$ language plpgsql;
-- Dump all data for a account by email and vessel_id
CREATE OR REPLACE FUNCTION public.dump_account_fn(IN _email TEXT, IN _vessel_id TEXT) RETURNS BOOLEAN
AS $dump_account$
BEGIN
RETURN True;
-- TODO use COPY but we can't all in one?
select count(*) from api.metrics m where vessel_id = _vessel_id;
select * from api.metadata m where vessel_id = _vessel_id;
select * from api.logbook l where vessel_id = _vessel_id;
select * from api.moorages m where vessel_id = _vessel_id;
select * from api.stays s where vessel_id = _vessel_id;
select * from auth.vessels v where vessel_id = _vessel_id;
select * from auth.accounts a where email = _email;
END
$dump_account$ language plpgsql;
CREATE OR REPLACE FUNCTION public.delete_vessel_fn(IN _vessel_id TEXT) RETURNS BOOLEAN
AS $delete_vessel$
BEGIN
RETURN True;
select count(*) from api.metrics m where vessel_id = _vessel_id;
delete from api.metrics m where vessel_id = _vessel_id;
select * from api.metadata m where vessel_id = _vessel_id;
delete from api.metadata m where vessel_id = _vessel_id;
delete from api.logbook l where vessel_id = _vessel_id;
delete from api.moorages m where vessel_id = _vessel_id;
delete from api.stays s where vessel_id = _vessel_id;
END
$delete_vessel$ language plpgsql;
@@ -13,6 +13,23 @@ CREATE SCHEMA IF NOT EXISTS public;
--------------------------------------------------------------------------- ---------------------------------------------------------------------------
-- basic helpers to check type and more -- basic helpers to check type and more
-- --
CREATE OR REPLACE FUNCTION public.isdouble(text) RETURNS BOOLEAN AS
$isdouble$
DECLARE x DOUBLE PRECISION;
BEGIN
x = $1::DOUBLE PRECISION;
RETURN TRUE;
EXCEPTION WHEN others THEN
RETURN FALSE;
END;
$isdouble$
STRICT
LANGUAGE plpgsql IMMUTABLE;
-- Description
COMMENT ON FUNCTION
public.isdouble
IS 'Check typeof value is double';
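The new `public.isdouble` helper follows the same pattern as the existing `isnumeric`: attempt the cast and trap the exception. The equivalent check in plain Python (for illustration; PostgreSQL's `DOUBLE PRECISION` input syntax differs slightly from Python's `float` parsing):

```python
def is_double(text: str) -> bool:
    # Mirror of public.isdouble: try the cast, return False on failure,
    # just like the plpgsql EXCEPTION WHEN others branch.
    try:
        float(text)
        return True
    except (TypeError, ValueError):
        return False
```

The try/except shape is what makes these helpers safe to use in WHERE clauses over untrusted text input, instead of letting a bad cast abort the whole query.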
CREATE OR REPLACE FUNCTION public.isnumeric(text) RETURNS BOOLEAN AS CREATE OR REPLACE FUNCTION public.isnumeric(text) RETURNS BOOLEAN AS
$isnumeric$ $isnumeric$
DECLARE x NUMERIC; DECLARE x NUMERIC;
@@ -17,7 +17,7 @@ CREATE SCHEMA IF NOT EXISTS public;
-- --
DROP FUNCTION IF EXISTS reverse_geocode_py_fn; DROP FUNCTION IF EXISTS reverse_geocode_py_fn;
CREATE OR REPLACE FUNCTION reverse_geocode_py_fn(IN geocoder TEXT, IN lon NUMERIC, IN lat NUMERIC, CREATE OR REPLACE FUNCTION reverse_geocode_py_fn(IN geocoder TEXT, IN lon NUMERIC, IN lat NUMERIC,
OUT geo_name TEXT) OUT geo jsonb)
AS $reverse_geocode_py$ AS $reverse_geocode_py$
import requests import requests
@@ -42,37 +42,44 @@ AS $reverse_geocode_py$
# Make the request to the geocoder API # Make the request to the geocoder API
# https://operations.osmfoundation.org/policies/nominatim/ # https://operations.osmfoundation.org/policies/nominatim/
payload = {"lon": lon, "lat": lat, "format": "jsonv2", "zoom": 18} payload = {"lon": lon, "lat": lat, "format": "jsonv2", "zoom": 18}
r = requests.get(url, params=payload) # https://nominatim.org/release-docs/latest/api/Reverse/
r = requests.get(url, headers = {"Accept-Language": "en-US,en;q=0.5"}, params=payload)
# Return the full address or nothing if not found # Parse response
# Option1: If name is null fallback to address field road,neighbourhood,suburb # Option1: If name is null fallback to address field road,neighbourhood,suburb
# Option2: Return the json for future reference like country # Option2: Return the json for future reference like country
if r.status_code == 200 and "name" in r.json(): if r.status_code == 200 and "name" in r.json():
r_dict = r.json() r_dict = r.json()
#plpy.notice('reverse_geocode_py_fn Parameters [{}] [{}] Response'.format(lon, lat, r_dict))
output = None
country_code = None
if "country_code" in r_dict["address"] and r_dict["address"]["country_code"]:
country_code = r_dict["address"]["country_code"]
if r_dict["name"]: if r_dict["name"]:
return r_dict["name"] return { "name": r_dict["name"], "country_code": country_code }
elif "address" in r_dict and r_dict["address"]: elif "address" in r_dict and r_dict["address"]:
if "road" in r_dict["address"] and r_dict["address"]["road"]: if "neighbourhood" in r_dict["address"] and r_dict["address"]["neighbourhood"]:
return r_dict["address"]["road"] return { "name": r_dict["address"]["neighbourhood"], "country_code": country_code }
elif "neighbourhood" in r_dict["address"] and r_dict["address"]["neighbourhood"]: elif "road" in r_dict["address"] and r_dict["address"]["road"]:
return r_dict["address"]["neighbourhood"] return { "name": r_dict["address"]["road"], "country_code": country_code }
elif "suburb" in r_dict["address"] and r_dict["address"]["suburb"]: elif "suburb" in r_dict["address"] and r_dict["address"]["suburb"]:
return r_dict["address"]["suburb"] return { "name": r_dict["address"]["suburb"], "country_code": country_code }
elif "residential" in r_dict["address"] and r_dict["address"]["residential"]: elif "residential" in r_dict["address"] and r_dict["address"]["residential"]:
return r_dict["address"]["residential"] return { "name": r_dict["address"]["residential"], "country_code": country_code }
elif "village" in r_dict["address"] and r_dict["address"]["village"]: elif "village" in r_dict["address"] and r_dict["address"]["village"]:
return r_dict["address"]["village"] return { "name": r_dict["address"]["village"], "country_code": country_code }
elif "town" in r_dict["address"] and r_dict["address"]["town"]: elif "town" in r_dict["address"] and r_dict["address"]["town"]:
return r_dict["address"]["town"] return { "name": r_dict["address"]["town"], "country_code": country_code }
else: else:
return 'n/a' return { "name": "n/a", "country_code": country_code }
else: else:
return 'n/a' return { "name": "n/a", "country_code": country_code }
else: else:
plpy.warning('Failed to received a geo full address %s', r.json()) plpy.warning('Failed to received a geo full address %s', r.json())
#plpy.error('Failed to received a geo full address %s', r.json()) #plpy.error('Failed to received a geo full address %s', r.json())
return 'unknow' return { "name": "unknown", "country_code": "unknown" }
$reverse_geocode_py$ LANGUAGE plpython3u; $reverse_geocode_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description -- Description
COMMENT ON FUNCTION COMMENT ON FUNCTION
public.reverse_geocode_py_fn public.reverse_geocode_py_fn
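With this change the geocoder returns jsonb (`{"name": ..., "country_code": ...}`) instead of a bare text name, and the name fallback now prefers `neighbourhood` over `road`. The selection logic can be sketched on its own against a parsed Nominatim response (field names as in the diff; the live function additionally handles the HTTP request, status codes, and warnings):

```python
def pick_geo(r_dict: dict) -> dict:
    # Mirrors the fallback order in reverse_geocode_py_fn:
    # top-level name first, then neighbourhood, road, suburb,
    # residential, village, town, finally "n/a".
    address = r_dict.get("address") or {}
    country_code = address.get("country_code")
    if r_dict.get("name"):
        return {"name": r_dict["name"], "country_code": country_code}
    for field in ("neighbourhood", "road", "suburb", "residential", "village", "town"):
        if address.get(field):
            return {"name": address[field], "country_code": country_code}
    return {"name": "n/a", "country_code": country_code}
```

Returning a dict (jsonb on the SQL side) is what lets callers such as `process_stay_queue_fn` read `geo->>'name'` and `geo->>'country_code'` from a single geocoder round trip.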
@@ -157,7 +164,7 @@ AS $send_email_py$
# Send the message via our own SMTP server. # Send the message via our own SMTP server.
try: try:
# send your message with credentials specified above # send your message with credentials specified above
with smtplib.SMTP(server_smtp, 25) as server: with smtplib.SMTP(server_smtp, 587) as server:
if 'app.email_user' in app and app['app.email_user'] \ if 'app.email_user' in app and app['app.email_user'] \
and 'app.email_pass' in app and app['app.email_pass']: and 'app.email_pass' in app and app['app.email_pass']:
server.starttls() server.starttls()
@@ -358,7 +365,7 @@ AS $reverse_geoip_py$
r = requests.get(url) r = requests.get(url)
#print(r.text) #print(r.text)
# Return something boolean? # Return something boolean?
#plpy.notice('IP [{}] [{}]'.format(_ip, r.status_code)) plpy.warning('IP [{}] [{}]'.format(_ip, r.status_code))
if r.status_code == 200: if r.status_code == 200:
#plpy.notice('Got [{}] [{}]'.format(r.text, r.status_code)) #plpy.notice('Got [{}] [{}]'.format(r.text, r.status_code))
return r.text; return r.text;
@@ -21,13 +21,13 @@ CREATE EXTENSION IF NOT EXISTS "pgcrypto"; -- provides cryptographic functions
DROP TABLE IF EXISTS auth.accounts CASCADE;
CREATE TABLE IF NOT EXISTS auth.accounts (
-userid UUID NOT NULL UNIQUE DEFAULT uuid_generate_v4(),
+public_id SERIAL UNIQUE NOT NULL,
user_id TEXT NOT NULL UNIQUE DEFAULT RIGHT(gen_random_uuid()::text, 12),
-email CITEXT primary key check ( email ~* '^.+@.+\..+$' ),
+email CITEXT PRIMARY KEY CHECK ( email ~* '^.+@.+\..+$' ),
-first text not null check (length(pass) < 512),
+first TEXT NOT NULL CHECK (length(pass) < 512),
-last text not null check (length(pass) < 512),
+last TEXT NOT NULL CHECK (length(pass) < 512),
-pass text not null check (length(pass) < 512),
+pass TEXT NOT NULL CHECK (length(pass) < 512),
-role name not null check (length(role) < 512),
+role name NOT NULL CHECK (length(role) < 512),
preferences JSONB NULL DEFAULT '{"email_notifications":true}',
created_at TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW(),
@@ -44,9 +44,11 @@ COMMENT ON TABLE
-- Indexes
-- is unused index?
--CREATE INDEX accounts_role_idx ON auth.accounts (role);
-CREATE INDEX accounts_preferences_idx ON auth.accounts using GIN (preferences);
+CREATE INDEX accounts_preferences_idx ON auth.accounts USING GIN (preferences);
--- is unused index?
---CREATE INDEX accounts_userid_idx ON auth.accounts (userid);
+CREATE INDEX accounts_public_id_idx ON auth.accounts (public_id);
+COMMENT ON COLUMN auth.accounts.public_id IS 'User public_id to allow mapping for anonymous access, could be use as well for as Grafana orgId';
+COMMENT ON COLUMN auth.accounts.first IS 'User first name with CONSTRAINT CHECK';
+COMMENT ON COLUMN auth.accounts.last IS 'User last name with CONSTRAINT CHECK';
CREATE TRIGGER accounts_moddatetime
BEFORE UPDATE ON auth.accounts
@@ -183,7 +185,10 @@ begin
-- check email and password
select auth.user_role(email, pass) into _role;
if _role is null then
-raise invalid_password using message = 'invalid user or password';
+-- HTTP/403
+--raise invalid_password using message = 'invalid user or password';
+-- HTTP/401
+raise insufficient_privilege using message = 'invalid user or password';
end if;
-- Get app_jwt_secret


@@ -25,7 +25,7 @@ COMMENT ON COLUMN api.metadata.vessel_id IS 'Link auth.vessels with api.metadata
-- List vessel
--TODO add geojson with position
DROP VIEW IF EXISTS api.vessels_view;
-CREATE OR REPLACE VIEW api.vessels_view AS
+CREATE OR REPLACE VIEW api.vessels_view WITH (security_invoker=true,security_barrier=true) AS
WITH metadata AS (
SELECT COALESCE(
(SELECT m.time
@@ -38,7 +38,9 @@ CREATE OR REPLACE VIEW api.vessels_view AS
v.name as name,
v.mmsi as mmsi,
v.created_at::timestamp(0) as created_at,
-m.last_contact as last_contact
+m.last_contact as last_contact,
+((NOW() AT TIME ZONE 'UTC' - m.last_contact::timestamp without time zone) > INTERVAL '70 MINUTES') as offline,
+(NOW() AT TIME ZONE 'UTC' - m.last_contact::timestamp without time zone) as duration
FROM auth.vessels v, metadata m
WHERE v.owner_email = current_setting('user.email');
-- Description
@@ -94,10 +96,11 @@ AS $vessel$
BEGIN
SELECT
jsonb_build_object(
-'name', v.name,
+'name', coalesce(m.name, null),
-'mmsi', coalesce(v.mmsi, null),
+'mmsi', coalesce(m.mmsi, null),
'created_at', v.created_at::timestamp(0),
-'last_contact', coalesce(m.time, null),
+'first_contact', coalesce(m.created_at::timestamp(0), null),
+'last_contact', coalesce(m.time::timestamp(0), null),
'geojson', coalesce(ST_AsGeoJSON(geojson_t.*)::json, null)
)::jsonb || api.vessel_details_fn()::jsonb
INTO vessel
@@ -115,7 +118,7 @@ AS $vessel$
latitude IS NOT NULL
AND longitude IS NOT NULL
AND vessel_id = current_setting('vessel.id', false)
-ORDER BY time DESC
+ORDER BY time DESC LIMIT 1
) AS geojson_t
WHERE
m.vessel_id = current_setting('vessel.id')
@@ -137,8 +140,9 @@ AS $user_settings$
from (
select a.email, a.first, a.last, a.preferences, a.created_at,
INITCAP(CONCAT (LEFT(first, 1), ' ', last)) AS username,
-public.has_vessel_fn() as has_vessel
+public.has_vessel_fn() as has_vessel,
--public.has_vessel_metadata_fn() as has_vessel_metadata,
+a.public_id
from auth.accounts a
where email = current_setting('user.email')
) row;
@@ -230,15 +234,16 @@ $vessel_details$
DECLARE
BEGIN
RETURN ( WITH tbl AS (
-SELECT mmsi,ship_type,length,beam,height FROM api.metadata WHERE vessel_id = current_setting('vessel.id', false)
+SELECT mmsi,ship_type,length,beam,height,plugin_version FROM api.metadata WHERE vessel_id = current_setting('vessel.id', false)
)
SELECT json_build_object(
-'ship_type', (SELECT ais.description FROM aistypes ais, tbl WHERE t.ship_type = ais.id),
+'ship_type', (SELECT ais.description FROM aistypes ais, tbl t WHERE t.ship_type = ais.id),
-'country', (SELECT mid.country FROM mid, tbl WHERE LEFT(cast(mmsi as text), 3)::NUMERIC = mid.id),
+'country', (SELECT mid.country FROM mid, tbl t WHERE LEFT(cast(t.mmsi as text), 3)::NUMERIC = mid.id),
-'alpha_2', (SELECT o.alpha_2 FROM mid m, iso3166 o, tbl WHERE LEFT(cast(mmsi as text), 3)::NUMERIC = m.id AND m.country_id = o.id),
+'alpha_2', (SELECT o.alpha_2 FROM mid m, iso3166 o, tbl t WHERE LEFT(cast(t.mmsi as text), 3)::NUMERIC = m.id AND m.country_id = o.id),
'length', t.ship_type,
'beam', t.beam,
-'height', t.height)
+'height', t.height,
+'plugin_version', t.plugin_version)
FROM tbl t
);
END;
@@ -251,11 +256,85 @@ COMMENT ON FUNCTION
DROP VIEW IF EXISTS api.eventlogs_view;
CREATE VIEW api.eventlogs_view WITH (security_invoker=true,security_barrier=true) AS
SELECT pq.*
-from public.process_queue pq
+FROM public.process_queue pq
-where ref_id = current_setting('user.id', true)
+WHERE ref_id = current_setting('user.id', true)
-or ref_id = current_setting('vessel.id', true)
+OR ref_id = current_setting('vessel.id', true)
-order by id asc;
+ORDER BY id ASC;
-- Description
COMMENT ON VIEW
api.eventlogs_view
IS 'Event logs view';
DROP FUNCTION IF EXISTS api.update_logbook_observations_fn;
-- Update/Add a specific user observations into logbook
CREATE OR REPLACE FUNCTION api.update_logbook_observations_fn(IN _id INT, IN observations TEXT) RETURNS BOOLEAN AS
$update_logbook_observations$
DECLARE
BEGIN
-- Merge existing observations with the new observations objects
RAISE NOTICE '-> update_logbook_extra_fn id:[%] observations:[%]', _id, observations;
-- { 'observations': { 'seaState': -1, 'cloudCoverage': -1, 'visibility': -1 } }
UPDATE api.logbook SET extra = public.jsonb_recursive_merge(extra, observations::jsonb) WHERE id = _id;
IF FOUND IS True THEN
RETURN True;
END IF;
RETURN False;
END;
$update_logbook_observations$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
api.update_logbook_observations_fn
IS 'Update/Add logbook observations jsonb key pair value';
CREATE TYPE public_type AS ENUM ('public_logs', 'public_logs_list', 'public_timelapse', 'public_stats');
CREATE FUNCTION api.ispublic_fn(IN id INTEGER, IN _type public_type) RETURNS BOOLEAN AS $ispublic$
DECLARE
_id INTEGER := id;
rec record;
valid_public_type BOOLEAN := False;
BEGIN
-- If _id is is not NULL and > 0
IF _id IS NULL OR _id < 1 THEN
RAISE WARNING '-> ispublic_fn invalid input %', _id;
RETURN False;
END IF;
-- Check if public_type is valid enum
SELECT _type::name = any(enum_range(null::public_type)::name[]) INTO valid_public_type;
IF valid_public_type IS False THEN
-- Ignore entry if type is invalid
RAISE WARNING '-> ispublic_fn invalid input type %', _type;
RETURN False;
END IF;
IF _type = 'public_logs' THEN
WITH log as (
select vessel_id from api.logbook l where l.id = _id
)
SELECT (l.vessel_id) is not null into rec
--SELECT l.vessel_id, 'email', 'settings', a.preferences
FROM auth.accounts a, auth.vessels v, jsonb_each_text(a.preferences), log l
WHERE v.vessel_id = l.vessel_id
AND a.email = v.owner_email
AND key = 'public_logs'::TEXT
AND value::BOOLEAN = true;
IF FOUND THEN
RETURN True;
END IF;
ELSE
SELECT (a.email) is not null into rec
--SELECT a.email, a.preferences
FROM auth.accounts a, jsonb_each_text(a.preferences)
WHERE a.public_id = _id
AND key = _type::TEXT
AND value::BOOLEAN = true;
IF FOUND THEN
RETURN True;
END IF;
END IF;
RETURN False;
END
$ispublic$ language plpgsql security definer;
-- Description
COMMENT ON FUNCTION
api.ispublic_fn
IS 'Is web page publicly accessible?';
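
The two helpers added in this file can be exercised directly from psql once the migration is applied. This is a hedged sketch, not from the repo: the logbook id and the observation keys are illustrative, and it assumes a session where `vessel.id` and the owner's preferences are already populated.

```sql
-- Merge a partial observations object into logbook 1 (id is illustrative)
SELECT api.update_logbook_observations_fn(1,
    '{"observations": {"seaState": 2, "cloudCoverage": 4}}'::TEXT);
-- Ask whether logbook 1 is exposed publicly via the owner's preferences
SELECT api.ispublic_fn(1, 'public_logs'::public_type);
```

Both functions run as `security definer`, so the caller only needs EXECUTE privilege, not direct access to `api.logbook` or `auth.accounts`.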


@@ -117,6 +117,7 @@ CREATE OR REPLACE FUNCTION api.reset(in pass text, in token text, in uuid text)
AS $reset_fn$
DECLARE
_email TEXT := NULL;
+_pass TEXT := pass;
BEGIN
-- Check parameters
IF token IS NULL OR uuid IS NULL OR pass IS NULL THEN
@@ -131,7 +132,7 @@ AS $reset_fn$
END IF;
-- Set user new password
UPDATE auth.accounts
-SET pass = pass
+SET pass = _pass
WHERE email = _email;
-- Enable email_validation into user preferences
PERFORM api.update_user_preferences_fn('{email_valid}'::TEXT, True::TEXT);


@@ -15,13 +15,13 @@ select current_database();
--
-- api_anonymous
-- nologin
--- api_anonymous role in the database with which to execute anonymous web requests, limit 10 connections
+-- api_anonymous role in the database with which to execute anonymous web requests, limit 20 connections
-- api_anonymous allows JWT token generation with an expiration time via function api.login() from auth.accounts table
-create role api_anonymous WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOLOGIN NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 10;
+create role api_anonymous WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOLOGIN NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 20;
comment on role api_anonymous is
'The role that PostgREST will switch to when a user is not authenticated.';
--- Limit to 10 connections
+-- Limit to 20 connections
---alter user api_anonymous connection limit 10;
+--alter user api_anonymous connection limit 20;
grant usage on schema api to api_anonymous;
-- explicitly limit EXECUTE privileges to only signup and login and reset functions
grant execute on function api.login(text,text) to api_anonymous;
@@ -46,25 +46,28 @@ comment on role authenticator is
'Role that serves as an entry-point for API servers such as PostgREST.';
grant api_anonymous to authenticator;
--- Grafana user and role with login, read-only, limit 15 connections
+-- Grafana user and role with login, read-only, limit 20 connections
-CREATE ROLE grafana WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 15 LOGIN PASSWORD 'mysecretpassword';
+CREATE ROLE grafana WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 20 LOGIN PASSWORD 'mysecretpassword';
comment on role grafana is
'Role that grafana will use for authenticated web users.';
-- Allow API schema and Tables
GRANT USAGE ON SCHEMA api TO grafana;
+-- Allow read on SEQUENCE on API schema
GRANT USAGE, SELECT ON SEQUENCE api.logbook_id_seq,api.metadata_id_seq,api.moorages_id_seq,api.stays_id_seq TO grafana;
-GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata TO grafana;
+-- Allow read on TABLES on API schema
+GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata,api.stays_at TO grafana;
-- Allow read on VIEWS on API schema
GRANT SELECT ON TABLE api.logs_view,api.moorages_view,api.stays_view TO grafana;
GRANT SELECT ON TABLE api.log_view,api.moorage_view,api.stay_view,api.vessels_view TO grafana;
-GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata,api.stays_at TO grafana;
+GRANT SELECT ON TABLE api.monitoring_view,api.monitoring_view2,api.monitoring_view3 TO grafana;
+GRANT SELECT ON TABLE api.monitoring_humidity,api.monitoring_voltage,api.monitoring_temperatures TO grafana;
-- Allow Auth schema and Tables
GRANT USAGE ON SCHEMA auth TO grafana;
GRANT SELECT ON TABLE auth.vessels TO grafana;
GRANT EXECUTE ON FUNCTION public.citext_eq(citext, citext) TO grafana;
--- Grafana_auth authenticator user and role with login, read-only on auth.accounts, limit 15 connections
+-- Grafana_auth authenticator user and role with login, read-only on auth.accounts, limit 20 connections
-CREATE ROLE grafana_auth WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 15 LOGIN PASSWORD 'mysecretpassword';
+CREATE ROLE grafana_auth WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION CONNECTION LIMIT 20 LOGIN PASSWORD 'mysecretpassword';
comment on role grafana_auth is
'Role that grafana auth proxy authenticator via apache.';
-- Allow read on VIEWS on API schema
@@ -79,29 +82,25 @@ GRANT EXECUTE ON FUNCTION public.citext_eq(citext, citext) TO grafana_auth;
-- User:
-- nologin, web api only
--- read-only for all and Read-Write on logbook, stays and moorage except for specific (name, notes) COLUMNS
+-- read-only for all and Read on logbook, stays and moorage and Write only for specific (name, notes) COLUMNS
CREATE ROLE user_role WITH NOLOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOBYPASSRLS NOREPLICATION;
comment on role user_role is
'Role that PostgREST will switch to for authenticated web users.';
GRANT user_role to authenticator;
GRANT USAGE ON SCHEMA api TO user_role;
+-- Allow read on SEQUENCE on API schema
GRANT USAGE, SELECT ON SEQUENCE api.logbook_id_seq,api.metadata_id_seq,api.moorages_id_seq,api.stays_id_seq TO user_role;
+-- Allow read on TABLES on API schema
GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata,api.stays_at TO user_role;
GRANT SELECT ON TABLE public.process_queue TO user_role;
-- To check?
GRANT SELECT ON TABLE auth.vessels TO user_role;
--- Allow users to update certain columns
+-- Allow users to update certain columns on specific TABLES on API schema
GRANT UPDATE (name, notes) ON api.logbook TO user_role;
GRANT UPDATE (name, notes, stay_code) ON api.stays TO user_role;
GRANT UPDATE (name, notes, stay_code, home_flag) ON api.moorages TO user_role;
+-- Allow EXECUTE on all FUNCTIONS on API and public schema
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
--- explicitly limit EXECUTE privileges to pgrest db-pre-request function
---GRANT EXECUTE ON FUNCTION public.check_jwt() TO user_role;
--- Allow others functions or allow all in public !! ??
---GRANT EXECUTE ON FUNCTION api.export_logbook_geojson_linestring_fn(int4) TO user_role;
---GRANT EXECUTE ON FUNCTION public.st_asgeojson(text) TO user_role;
---GRANT EXECUTE ON FUNCTION public.geography_eq(geography, geography) TO user_role;
--- TODO should not be need !! ??
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user_role;
-- pg15 feature security_invoker=true,security_barrier=true
@@ -109,33 +108,12 @@ GRANT SELECT ON TABLE api.logs_view,api.moorages_view,api.stays_view TO user_rol
GRANT SELECT ON TABLE api.log_view,api.moorage_view,api.stay_view,api.vessels_view TO user_role;
GRANT SELECT ON TABLE api.monitoring_view,api.monitoring_view2,api.monitoring_view3 TO user_role;
GRANT SELECT ON TABLE api.monitoring_humidity,api.monitoring_voltage,api.monitoring_temperatures TO user_role;
+GRANT SELECT ON TABLE api.stats_moorages_away_view,api.versions_view TO user_role;
GRANT SELECT ON TABLE api.total_info_view TO user_role;
GRANT SELECT ON TABLE api.stats_logs_view TO user_role;
GRANT SELECT ON TABLE api.stats_moorages_view TO user_role;
GRANT SELECT ON TABLE api.eventlogs_view TO user_role;
--- Update ownership for security user_role as run by web user.
+GRANT SELECT ON TABLE api.vessels_view TO user_role;
--- Web listing
---ALTER VIEW api.stays_view OWNER TO user_role;
---ALTER VIEW api.moorages_view OWNER TO user_role;
---ALTER VIEW api.logs_view OWNER TO user_role;
---ALTER VIEW api.vessel_p_view OWNER TO user_role;
---ALTER VIEW api.monitoring_view OWNER TO user_role;
--- Remove all permissions except select
---REVOKE UPDATE, TRUNCATE, REFERENCES, DELETE, TRIGGER, INSERT ON TABLE api.stays_view FROM user_role;
---REVOKE UPDATE, TRUNCATE, REFERENCES, DELETE, TRIGGER, INSERT ON TABLE api.moorages_view FROM user_role;
---REVOKE UPDATE, TRUNCATE, REFERENCES, DELETE, TRIGGER, INSERT ON TABLE api.logs_view FROM user_role;
---REVOKE UPDATE, TRUNCATE, REFERENCES, DELETE, TRIGGER, INSERT ON TABLE api.monitoring_view FROM user_role;
--- Allow read and update on VIEWS
--- Web detail view
---ALTER VIEW api.log_view OWNER TO user_role;
--- Remove all permissions except select and update
---REVOKE TRUNCATE, DELETE, TRIGGER, INSERT ON TABLE api.log_view FROM user_role;
-ALTER VIEW api.vessels_view OWNER TO user_role;
--- Remove all permissions except select and update
-REVOKE TRUNCATE, DELETE, TRIGGER, INSERT ON TABLE api.vessels_view FROM user_role;
-- Vessel:
-- nologin
@@ -145,8 +123,10 @@ comment on role vessel_role is
'Role that PostgREST will switch to for authenticated web vessels.';
GRANT vessel_role to authenticator;
GRANT USAGE ON SCHEMA api TO vessel_role;
-GRANT INSERT, UPDATE, SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata TO vessel_role;
+-- Allow read on SEQUENCE on API schema
GRANT USAGE, SELECT ON SEQUENCE api.logbook_id_seq,api.metadata_id_seq,api.moorages_id_seq,api.stays_id_seq TO vessel_role;
+-- Allow read/write on TABLES on API schema
+GRANT INSERT, UPDATE, SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata TO vessel_role;
GRANT INSERT ON TABLE public.process_queue TO vessel_role;
GRANT USAGE, SELECT ON SEQUENCE public.process_queue_id_seq TO vessel_role;
-- explicitly limit EXECUTE privileges to pgrest db-pre-request function
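
After applying the grants in this file, they can be spot-checked with PostgreSQL's built-in privilege inspection functions. A sketch (run from psql as a superuser); role and object names are the ones defined above:

```sql
-- Table-level read for the read-only grafana role
SELECT has_table_privilege('grafana', 'api.logbook', 'SELECT');
-- user_role only holds column-level UPDATE, so check per column
SELECT has_column_privilege('user_role', 'api.logbook', 'name', 'UPDATE');
-- vessel_role needs write access to the telemetry tables
SELECT has_table_privilege('vessel_role', 'api.metrics', 'INSERT');
```

Note that `has_table_privilege(..., 'UPDATE')` returns false when only column-level UPDATE was granted, which is why the column variant is used for `user_role`.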


@@ -23,7 +23,7 @@ SELECT cron.schedule('cron_new_moorage', '*/7 * * * *', 'select public.cron_proc
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_moorage';
-- Create a every 10 minute job cron_process_monitor_offline_fn
-SELECT cron.schedule('cron_monitor_offline', '*/10 * * * *', 'select public.cron_process_monitor_offline_fn()');
+SELECT cron.schedule('cron_monitor_offline', '*/11 * * * *', 'select public.cron_process_monitor_offline_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_monitor_offline';
-- Create a every 10 minute job cron_process_monitor_online_fn
@@ -64,11 +64,19 @@ SELECT cron.schedule('cron_prune_otp', '*/15 * * * *', 'select public.cron_proce
-- Create a every 11 minute job cron_process_alerts_fn
--SELECT cron.schedule('cron_alerts', '*/11 * * * *', 'select public.cron_process_alerts_fn()');
+-- Notifications/Reminders of no vessel & no metadata & no activity
+-- At 08:05 on Sunday.
+-- At 08:05 on every 4th day-of-month if it's on Sunday.
+SELECT cron.schedule('cron_no_vessel', '5 8 */4 * 0', 'select public.cron_process_no_vessel_fn()');
+SELECT cron.schedule('cron_no_metadata', '5 8 */4 * 0', 'select public.cron_process_no_metadata_fn()');
+SELECT cron.schedule('cron_no_activity', '5 8 */4 * 0', 'select public.cron_process_no_activity_fn()');
-- Cron job settings
UPDATE cron.job SET database = 'signalk';
UPDATE cron.job SET username = 'username'; -- TODO update to scheduler, pending process_queue update
--UPDATE cron.job SET username = 'username' where jobname = 'cron_vacuum'; -- TODO Update to superuser for vaccuum permissions
UPDATE cron.job SET nodename = '/var/run/postgresql/'; -- VS default localhost ??
+UPDATE cron.job SET database = 'postgresql' WHERE jobname = 'job_run_details_cleanup_fn';
-- check job lists
SELECT * FROM cron.job;
-- unschedule by job id
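
To confirm what pg_cron actually registered after this file runs, the catalog can be queried directly. A sketch; `cron.unschedule(jobname)` by name requires pg_cron >= 1.4, and the job name below is one scheduled above:

```sql
-- List scheduled jobs with their schedule and target database
SELECT jobid, jobname, schedule, database FROM cron.job ORDER BY jobid;
-- Remove a job by name if it is no longer wanted
SELECT cron.unschedule('cron_no_activity');
```

Worth noting: in classic cron semantics, specifying both day-of-month (`*/4`) and day-of-week (`0`) makes the entry fire when either field matches, so `5 8 */4 * 0` runs on every 4th day of the month and on Sundays.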


@@ -1 +1 @@
-0.2.3
+0.4.0

File diff suppressed because one or more lines are too long


@@ -103,6 +103,14 @@ var moment = require('moment');
obj_name: null
}
},
+{ url: '/rpc/export_logbook_kml_fn',
+payload: {
+_id: 2
+},
+res: {
+obj_name: null
+}
+},
{ url: '/rpc/export_moorages_geojson_fn',
payload: {},
res: {
@@ -293,6 +301,32 @@ var moment = require('moment');
obj_name: null
}
},
+{ url: '/rpc/export_logbook_kml_fn',
+payload: {
+_id: 4
+},
+res: {
+obj_name: null
+}
+},
+{ url: '/rpc/export_logbooks_gpx_fn',
+payload: {
+start_log: 3,
+end_log: 4
+},
+res: {
+obj_name: null
+}
+},
+{ url: '/rpc/export_logbooks_kml_fn',
+payload: {
+start_log: 3,
+end_log: 4
+},
+res: {
+obj_name: null
+}
+},
{ url: '/rpc/export_moorages_geojson_fn',
payload: {},
res: {

View File

@@ -25,7 +25,7 @@ SELECT set_config('vessel.id', :'vessel_id', false) IS NOT NULL as vessel_id;
\echo 'logbook'
SELECT count(*) FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
\echo 'logbook'
-SELECT name,_from_time IS NOT NULL AS _from_time,_to_time IS NOT NULL AS _to_time, track_geojson IS NOT NULL AS track_geojson, track_gpx IS NOT NULL AS track_gpx, track_geom, distance,duration,avg_speed,max_speed,max_wind_speed,notes,extra FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
+SELECT name,_from_time IS NOT NULL AS _from_time,_to_time IS NOT NULL AS _to_time, track_geojson IS NOT NULL AS track_geojson, track_geom, distance,duration,avg_speed,max_speed,max_wind_speed,notes,extra FROM api.logbook WHERE vessel_id = current_setting('vessel.id', false);
-- Test stays for user
\echo 'stays'
@@ -35,9 +35,37 @@ SELECT active,name,geog,stay_code FROM api.stays WHERE vessel_id = current_setti
-- Test event logs view for user
\echo 'eventlogs_view'
-select count(*) from api.eventlogs_view;
+SELECT count(*) from api.eventlogs_view;
-- Test event logs view for user
\echo 'stats_logs_fn'
-select api.stats_logs_fn(null, null);
-select api.stats_logs_fn('2022-01-01'::text,'2022-06-12'::text);
+SELECT api.stats_logs_fn(null, null) INTO stats_jsonb;
+SELECT stats_logs_fn->'name' AS name,
+stats_logs_fn->'count' AS count,
+stats_logs_fn->'max_speed' As max_speed,
+stats_logs_fn->'max_distance' AS max_distance,
+stats_logs_fn->'max_duration' AS max_duration,
+stats_logs_fn->'max_speed_id',
+stats_logs_fn->'sum_distance',
+stats_logs_fn->'sum_duration',
+stats_logs_fn->'max_wind_speed',
+stats_logs_fn->'max_distance_id',
+stats_logs_fn->'max_duration_id',
+stats_logs_fn->'max_wind_speed_id',
+stats_logs_fn->'first_date' IS NOT NULL AS first_date,
+stats_logs_fn->'last_date' IS NOT NULL AS last_date
+FROM stats_jsonb;
+DROP TABLE stats_jsonb;
+SELECT api.stats_logs_fn('2022-01-01'::text,'2022-06-12'::text);
+-- Update logbook observations
+\echo 'update_logbook_observations_fn'
+SELECT extra FROM api.logbook l WHERE id = 1 AND vessel_id = current_setting('vessel.id', false);
+SELECT api.update_logbook_observations_fn(1, '{"observations":{"cloudCoverage":1}}'::TEXT);
+SELECT extra FROM api.logbook l WHERE id = 1 AND vessel_id = current_setting('vessel.id', false);
+-- Check export
+--\echo 'check logbook export fn'
+--SELECT api.export_logbook_geojson_fn(1);
+--SELECT api.export_logbook_gpx_fn(1);
+--SELECT api.export_logbook_kml_fn(1);


@@ -17,14 +17,13 @@ count | 2
logbook
-[ RECORD 1 ]--+----------------------------------------------------------------
-name | Bollsta to Strandallén
+name | Bollsta to Slottsbacken
_from_time | t
_to_time | t
track_geojson | t
-track_gpx | t
track_geom | 0102000020E61000001A00000020D26F5F0786374030BB270F0B094E400C6E7ED60F843740AA60545227084E40D60FC48C03823740593CE27D42074E407B39D9F322803740984C158C4A064E4091ED7C3F357E3740898BB63D54054E40A8A1208B477C37404BA3DC9059044E404C5CB4EDA17A3740C4F856115B034E40A9A44E4013793740D8F0F44A59024E40E4839ECDAA773740211FF46C56014E405408D147067637408229F03B73004E40787AA52C43743740F90FE9B7AFFF4D40F8098D4D18723740C217265305FF4D4084E82303537037409A2D464AA0FE4D4022474DCE636F37402912396A72FE4D408351499D806E374088CFB02B40FE4D4076711B0DE06D3740B356C7040FFE4D404EAC66B0BC6E374058A835CD3BFE4D40D7A3703D0A6F3740D3E10EC15EFE4D4087602F277B6E3740A779C7293AFE4D4087602F277B6E3740A779C7293AFE4D402063EE5A426E3740B5A679C729FE4D40381DEE10EC6D37409ECA7C1A0AFE4D40E2C46A06CB6B37400A43F7BF36FD4D4075931804566E3740320BDAD125FD4D409A2D464AA06E37404A5658830AFD4D40029A081B9E6E37404A5658830AFD4D40
distance | 7.17
-duration | 00:25:00
+duration | PT25M
avg_speed | 3.6961538461538455
max_speed | 6.1
max_wind_speed | 22.1
@@ -35,10 +34,9 @@ name | Knipan to Ekenäs
_from_time | t
_to_time | t
track_geojson | t
-track_gpx | t
track_geom | 0102000020E6100000130000004806A6C0EF6C3740DA1B7C6132FD4D40FE65F7E461693740226C787AA5FC4D407DD3E10EC1663740B29DEFA7C6FB4D40898BB63D5465374068479724BCFA4D409A5271F6E1633740B6847CD0B3F94D40431CEBE236623740E9263108ACF84D402C6519E2585F37407E678EBFC7F74D4096218E75715B374027C5B45C23F74D402AA913D044583740968DE1C46AF64D405AF5B9DA8A5537407BEF829B9FF54D407449C2ABD253374086C954C1A8F44D407D1A0AB278543740F2B0506B9AF34D409D11A5BDC15737406688635DDCF24D4061C3D32B655937402CAF6F3ADCF14D408988888888583740B3319C58CDF04D4021FAC8C0145837408C94405DB7EF4D40B8F9593F105B37403DC0804BEDEE4D40DE4C5FE2A25D3740AE47E17A14EE4D40DE4C5FE2A25D3740AE47E17A14EE4D40
distance | 8.6862
-duration | 00:18:00
+duration | PT18M
avg_speed | 6.026315789473684
max_speed | 6.5
max_wind_speed | 37.2
@@ -57,7 +55,7 @@ geog | 0101000020E6100000B0DEBBE0E68737404DA938FBF0094E40
stay_code | 2
-[ RECORD 2 ]-------------------------------------------------
active | f active | f
name | Strandallén name | Slottsbacken
geog | 0101000020E6100000029A081B9E6E37404A5658830AFD4D40 geog | 0101000020E6100000029A081B9E6E37404A5658830AFD4D40
stay_code | 1 stay_code | 1
-[ RECORD 3 ]------------------------------------------------- -[ RECORD 3 ]-------------------------------------------------
@@ -71,9 +69,34 @@ eventlogs_view
count | 13 count | 13
stats_logs_fn stats_logs_fn
-[ RECORD 1 ]-+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SELECT 1
stats_logs_fn | {"count": 4, "max_speed": 7.1, "max_distance": 8.6862, "max_duration": "01:11:00", "max_speed_id": 3, "sum_duration": "02:37:00", "max_wind_speed": 44.2, "max_distance_id": 2, "max_wind_speed_id": 4} -[ RECORD 1 ]+----------
name | "kapla"
count | 4
max_speed | 7.1
max_distance | 8.6862
max_duration | "PT1H11M"
?column? | 3
?column? | 29.2865
?column? | "PT2H37M"
?column? | 44.2
?column? | 2
?column? | 4
?column? | 4
first_date | t
last_date | t
DROP TABLE
-[ RECORD 1 ]-+- -[ RECORD 1 ]-+-
stats_logs_fn | stats_logs_fn |
update_logbook_observations_fn
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": 10}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": -1}}
-[ RECORD 1 ]------------------+--
update_logbook_observations_fn | t
-[ RECORD 1 ]---------------------------------------------------------------------------------------------------------------
extra | {"metrics": {"propulsion.main.runTime": 10}, "observations": {"seaState": -1, "visibility": -1, "cloudCoverage": 1}}


@@ -23,7 +23,7 @@ SELECT current_user, current_setting('user.email', true), current_setting('vesse
SELECT v.name,m.client_id FROM auth.accounts a JOIN auth.vessels v ON a.role = 'user_role' AND v.owner_email = a.email JOIN api.metadata m ON m.vessel_id = v.vessel_id; SELECT v.name,m.client_id FROM auth.accounts a JOIN auth.vessels v ON a.role = 'user_role' AND v.owner_email = a.email JOIN api.metadata m ON m.vessel_id = v.vessel_id;
\echo 'auth.accounts details' \echo 'auth.accounts details'
SELECT a.userid IS NOT NULL AS userid, a.user_id IS NOT NULL AS user_id, a.email, a.first, a.last, a.pass IS NOT NULL AS pass, a.role, a.preferences->'telegram'->'chat' AS telegram, a.preferences->'pushover_user_key' AS pushover_user_key FROM auth.accounts AS a; SELECT a.public_id IS NOT NULL AS public_id, a.user_id IS NOT NULL AS user_id, a.email, a.first, a.last, a.pass IS NOT NULL AS pass, a.role, a.preferences->'telegram'->'chat' AS telegram, a.preferences->'pushover_user_key' AS pushover_user_key FROM auth.accounts AS a;
\echo 'auth.vessels details' \echo 'auth.vessels details'
--SELECT 'SELECT ' || STRING_AGG('v.' || column_name, ', ') || ' FROM auth.vessels AS v' FROM information_schema.columns WHERE table_name = 'vessels' AND table_schema = 'auth' AND column_name NOT IN ('created_at', 'updated_at'); --SELECT 'SELECT ' || STRING_AGG('v.' || column_name, ', ') || ' FROM auth.vessels AS v' FROM information_schema.columns WHERE table_name = 'vessels' AND table_schema = 'auth' AND column_name NOT IN ('created_at', 'updated_at');
SELECT v.vessel_id IS NOT NULL AS vessel_id, v.owner_email, v.mmsi, v.name, v.role FROM auth.vessels AS v; SELECT v.vessel_id IS NOT NULL AS vessel_id, v.owner_email, v.mmsi, v.name, v.role FROM auth.vessels AS v;
@@ -60,7 +60,7 @@ SELECT m.id, m.name, m.mmsi, m.client_id, m.length, m.beam, m.height, m.ship_typ
\echo 'api.logs_view' \echo 'api.logs_view'
--SELECT * FROM api.logbook l; --SELECT * FROM api.logbook l;
--SELECT * FROM api.logs_view l; --SELECT * FROM api.logs_view l;
SELECT l.id, "Name", "From", "To", "Distance", "Duration" FROM api.logs_view AS l; SELECT l.id, l.name, l.from, l.to, l.distance, l.duration FROM api.logs_view AS l;
--SELECT * FROM api.log_view l; --SELECT * FROM api.log_view l;
\echo 'api.stays' \echo 'api.stays'


@@ -23,7 +23,7 @@ client_id | vessels.urn:mrn:imo:mmsi:787654321
auth.accounts details auth.accounts details
-[ RECORD 1 ]-----+----------------------------- -[ RECORD 1 ]-----+-----------------------------
userid | t public_id | t
user_id | t user_id | t
email | demo+kapla@openplotter.cloud email | demo+kapla@openplotter.cloud
first | First_kapla first | First_kapla
@@ -33,7 +33,7 @@ role | user_role
telegram | telegram |
pushover_user_key | pushover_user_key |
-[ RECORD 2 ]-----+----------------------------- -[ RECORD 2 ]-----+-----------------------------
userid | t public_id | t
user_id | t user_id | t
email | demo+aava@openplotter.cloud email | demo+aava@openplotter.cloud
first | first_aava first | first_aava
@@ -127,18 +127,18 @@ active | t
api.logs_view api.logs_view
-[ RECORD 1 ]-------------- -[ RECORD 1 ]--------------
id | 2 id | 2
Name | Knipan to Ekenäs name | Knipan to Ekenäs
From | Knipan from | Knipan
To | Ekenäs to | Ekenäs
Distance | 8.6862 distance | 8.6862
Duration | 00:18:00 duration | PT18M
-[ RECORD 2 ]-------------- -[ RECORD 2 ]--------------
id | 1 id | 1
Name | patch log name 3 name | patch log name 3
From | Bollsta from | Bollsta
To | Strandallén to | Slottsbacken
Distance | 7.17 distance | 7.17
Duration | 00:25:00 duration | PT25M
api.stays api.stays
-[ RECORD 1 ]------------------------------------------------- -[ RECORD 1 ]-------------------------------------------------
@@ -158,7 +158,7 @@ notes | new stay note 3
id | 2 id | 2
vessel_id | t vessel_id | t
active | f active | f
name | Strandallén name | Slottsbacken
latitude | 59.97688333333333 latitude | 59.97688333333333
longitude | 23.4321 longitude | 23.4321
geog | 0101000020E6100000029A081B9E6E37404A5658830AFD4D40 geog | 0101000020E6100000029A081B9E6E37404A5658830AFD4D40
@@ -185,10 +185,10 @@ stays_view
-[ RECORD 1 ]+------------------ -[ RECORD 1 ]+------------------
id | 2 id | 2
name | t name | t
moorage | Strandallén moorage | Slottsbacken
moorage_id | 2 moorage_id | 2
duration | 00:03:00 duration | PT3M
stayed_at | Unknow stayed_at | Unknown
stayed_at_id | 1 stayed_at_id | 1
arrived | t arrived | t
departed | t departed | t
@@ -198,7 +198,7 @@ id | 1
name | t name | t
moorage | patch stay name 3 moorage | patch stay name 3
moorage_id | 1 moorage_id | 1
duration | 00:02:00 duration | PT2M
stayed_at | Anchor stayed_at | Anchor
stayed_at_id | 2 stayed_at_id | 2
arrived | t arrived | t
@@ -210,10 +210,10 @@ api.moorages
id | 1 id | 1
vessel_id | t vessel_id | t
name | patch moorage name 3 name | patch moorage name 3
country | country | fi
stay_id | 1 stay_id | 1
stay_code | 2 stay_code | 2
stay_duration | 00:02:00 stay_duration | PT2M
reference_count | 1 reference_count | 1
latitude | 60.077666666666666 latitude | 60.077666666666666
longitude | 23.530866666666668 longitude | 23.530866666666668
@@ -223,11 +223,11 @@ notes | new moorage note 3
-[ RECORD 2 ]---+--------------------------------------------------- -[ RECORD 2 ]---+---------------------------------------------------
id | 2 id | 2
vessel_id | t vessel_id | t
name | Strandallén name | Slottsbacken
country | country | fi
stay_id | 2 stay_id | 2
stay_code | 1 stay_code | 1
stay_duration | 00:03:00 stay_duration | PT3M
reference_count | 1 reference_count | 1
latitude | 59.97688333333333 latitude | 59.97688333333333
longitude | 23.4321 longitude | 23.4321
@@ -245,8 +245,8 @@ total_stay | 0
arrivals_departures | 1 arrivals_departures | 1
-[ RECORD 2 ]-------+--------------------- -[ RECORD 2 ]-------+---------------------
id | 2 id | 2
moorage | Strandallén moorage | Slottsbacken
default_stay | Unknow default_stay | Unknown
default_stay_id | 1 default_stay_id | 1
total_stay | 0 total_stay | 0
arrivals_departures | 1 arrivals_departures | 1


@@ -6,7 +6,7 @@
You are now connected to database "signalk" as user "username". You are now connected to database "signalk" as user "username".
Expanded display is on. Expanded display is on.
-[ RECORD 1 ]--+------------------------------- -[ RECORD 1 ]--+-------------------------------
server_version | 15.4 (Debian 15.4-1.pgdg110+1) server_version | 15.4 (Debian 15.4-2.pgdg110+1)
-[ RECORD 1 ]--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -[ RECORD 1 ]--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
postgis_full_version | POSTGIS="3.4.0 0874ea3" [EXTENSION] PGSQL="150" GEOS="3.9.0-CAPI-1.16.2" PROJ="7.2.1 NETWORK_ENABLED=OFF URL_ENDPOINT=https://cdn.proj.org USER_WRITABLE_DIRECTORY=/var/lib/postgresql/.local/share/proj DATABASE_PATH=/usr/share/proj/proj.db" LIBXML="2.9.10" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)" postgis_full_version | POSTGIS="3.4.0 0874ea3" [EXTENSION] PGSQL="150" GEOS="3.9.0-CAPI-1.16.2" PROJ="7.2.1 NETWORK_ENABLED=OFF URL_ENDPOINT=https://cdn.proj.org USER_WRITABLE_DIRECTORY=/var/lib/postgresql/.local/share/proj DATABASE_PATH=/usr/share/proj/proj.db" LIBXML="2.9.10" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
@@ -53,7 +53,7 @@ Schema | public
Description | PostGIS geometry and geography spatial types and functions Description | PostGIS geometry and geography spatial types and functions
-[ RECORD 9 ]-------------------------------------------------------------------------------------- -[ RECORD 9 ]--------------------------------------------------------------------------------------
Name | timescaledb Name | timescaledb
Version | 2.11.2 Version | 2.12.2
Schema | public Schema | public
Description | Enables scalable inserts and complex queries for time-series data (Community Edition) Description | Enables scalable inserts and complex queries for time-series data (Community Edition)
-[ RECORD 10 ]------------------------------------------------------------------------------------- -[ RECORD 10 ]-------------------------------------------------------------------------------------
@@ -106,14 +106,14 @@ laninline | 13540
lanvalidator | 13541 lanvalidator | 13541
lanacl | lanacl |
-[ RECORD 5 ]-+----------- -[ RECORD 5 ]-+-----------
oid | 18174 oid | 18283
lanname | plpython3u lanname | plpython3u
lanowner | 10 lanowner | 10
lanispl | t lanispl | t
lanpltrusted | t lanpltrusted | t
lanplcallfoid | 18171 lanplcallfoid | 18280
laninline | 18172 laninline | 18281
lanvalidator | 18173 lanvalidator | 18282
lanacl | lanacl |
-[ RECORD 1 ]+----------- -[ RECORD 1 ]+-----------
@@ -592,17 +592,17 @@ qual | true
with_check | false with_check | false
Test nominatim reverse_geocode_py_fn Test nominatim reverse_geocode_py_fn
-[ RECORD 1 ]---------+------- -[ RECORD 1 ]---------+----------------------------------------
reverse_geocode_py_fn | España reverse_geocode_py_fn | {"name": "Spain", "country_code": "es"}
Test geoip reverse_geoip_py_fn Test geoip reverse_geoip_py_fn
-[ RECORD 1 ]---------------------------------------------------------------------------------------------------------------------------------------------- -[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------
versions_fn | {"api_version" : "0.2.3", "sys_version" : "PostgreSQL 15.4", "timescaledb" : "2.11.2", "postgis" : "3.4.0", "postgrest" : "PostgREST 11.2.0"} versions_fn | {"api_version" : "0.4.0", "sys_version" : "PostgreSQL 15.4", "timescaledb" : "2.12.2", "postgis" : "3.4.0", "postgrest" : "PostgREST 11.2.1"}
-[ RECORD 1 ]----------------- -[ RECORD 1 ]-----------------
api_version | 0.2.3 api_version | 0.4.0
sys_version | PostgreSQL 15.4 sys_version | PostgreSQL 15.4
timescaledb | 2.11.2 timescaledb | 2.12.2
postgis | 3.4.0 postgis | 3.4.0
postgrest | PostgREST 11.2.0 postgrest | PostgREST 11.2.1


@@ -135,9 +135,19 @@ diff sql/monitoring.sql.output output/monitoring.sql.output > /dev/null
#diff -u sql/monitoring.sql.output output/monitoring.sql.output | wc -l #diff -u sql/monitoring.sql.output output/monitoring.sql.output | wc -l
#echo 0 #echo 0
if [ $? -eq 0 ]; then if [ $? -eq 0 ]; then
echo OK echo SQL monitoring.sql OK
else else
echo SQL monitoring.sql FAILED echo SQL monitoring.sql FAILED
diff -u sql/monitoring.sql.output output/monitoring.sql.output diff -u sql/monitoring.sql.output output/monitoring.sql.output
exit 1 exit 1
fi fi
# Download and update openapi documentation
wget ${PGSAIL_API_URI} -O ../openapi.json
#echo 0
if [ $? -eq 0 ]; then
echo openapi.json OK
else
echo openapi.json FAILED
exit 1
fi
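
The diff-gate pattern used by the test runner above (compare expected vs. actual SQL output, print a labeled OK/FAILED, exit non-zero on mismatch) can be sketched standalone as below. This is a minimal illustration using temporary fixture files rather than the real `sql/monitoring.sql.output` and `output/monitoring.sql.output` paths; the real script additionally fetches `${PGSAIL_API_URI}` into `../openapi.json` with `wget`, which is omitted here to avoid a network dependency.

```shell
#!/bin/sh
# check_output LABEL EXPECTED ACTUAL: the diff gate used by the test runner.
# Prints "LABEL OK" when the files match; otherwise prints "LABEL FAILED",
# shows a unified diff, and returns non-zero so the caller can `exit 1`.
check_output() {
    label=$1; expected=$2; actual=$3
    if diff "$expected" "$actual" > /dev/null; then
        echo "$label OK"
    else
        echo "$label FAILED"
        diff -u "$expected" "$actual"
        return 1
    fi
}

# Fixture files standing in for the expected/actual psql output files.
tmp=$(mktemp -d)
printf 'record 1\n' > "$tmp/expected"
printf 'record 1\n' > "$tmp/actual"

check_output "SQL monitoring.sql" "$tmp/expected" "$tmp/actual"
```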