initial release

commit 8e2c65681a by xbgmsharp
2022-08-24 21:07:00 +00:00
13 changed files with 3049 additions and 0 deletions

LICENSE (Normal file, 201 lines)

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md (Normal file, 129 lines)

@@ -0,0 +1,129 @@
# PostgSail
Effortless cloud-based solution for storing and sharing your Signal K data: log your sails and monitor your boat with ease.
### Context
It is all about SQL: an object-relational, time-series, spatial database, with a bit of Python.
### Features
- Automatically log your voyages without manually starting or stopping a trip.
- Automatically capture the details of your voyages (boat speed, heading, wind speed, etc.).
- Timelapse video your trips!
- Add custom notes to your logs.
- Export to CSV or GPX and download your logs.
- Aggregate your trip statistics: longest voyage, time spent at anchorages, home ports, etc.
- See your moorages on a global map, with incoming and outgoing voyages from each trip.
- Monitor your boat (position, depth, wind, temperature, battery charge status, etc.) remotely.
- History: view trends.
- Alert monitoring: get remote notifications on low voltage or low fuel.
- Notification via email or PushOver.
### Cloud
The cloud advantage: hosted and fully managed options for PostgSail, designed for all your deployment and business needs. Register and try for free at https://iot.openplotter.cloud/.
### Pre-deploy configuration
To get things running, copy `.env.example` to `.env`, then set the values accordingly.
Note that `PGRST_JWT_SECRET` must be at least 32 characters long.
`$ head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo ''`
### Deploy
By default there is no network set and the PostgreSQL data is stored in a Docker volume.
You can update the default settings by editing `docker-compose.yml` to your needs.
Then simply execute:
```
$ docker-compose up
```
### PostgSail Configuration
Check and update your PostgSail settings via SQL in the table `app_settings`:
```
select * from app_settings;
```
```
UPDATE app_settings
SET
value = 'new_value'
WHERE name = 'app.email_server';
```
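For example, to read back a single setting before changing it:
```
SELECT value FROM app_settings WHERE name = 'app.email_server';
```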
### Ingest data
Next, to ingest data from Signal K, you need to install the [signalk-postgsail](https://github.com/xbgmsharp/signalk-postgsail) plugin on your Signal K server instance.
Also, if you like, you can import Saillogger data using the PostgSail helpers, [postgsail-helpers](https://github.com/xbgmsharp/postgsail-helpers).
You might want to import your InfluxDB v1 data as well, using [outflux](https://github.com/timescale/outflux).
Any takers on an InfluxDB v2 to PostgSail importer? It is definitely possible.
Lastly, if you like, you can import the sample Signal K NMEA data (Plaka) by running the tests.
If everything goes well, all tests pass successfully and you should receive a few notifications by email or PushOver.
```
$ docker-compose up tests
```
### API Documentation
The OpenAPI description output depends on the permissions of the role that is contained in the JWT role claim.
API anonymous:
```
$ curl http://localhost:3000/
```
API user_role:
```
$ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_login_or_signup_fn'
```
API vessel_role:
```
$ curl http://localhost:3000/ -H 'Authorization: Bearer my_token_from_register_vessel_fn'
```
#### API main workflow
Check the [unit test sample](https://github.com/xbgmsharp/PostgSail/blob/main/tests/index.js).
### Docker dependencies
`docker-compose` is used to start the environment dependencies, which consist of two containers:
- `timescaledb-postgis`, alias `db`: PostgreSQL with the TimescaleDB and PostGIS extensions.
- `postgrest`, alias `api`: a standalone web server that turns your PostgreSQL database directly into a RESTful API.
### Optional docker images
- [Grafana](https://hub.docker.com/r/grafana/grafana), visualize and monitor your data
- [pgAdmin](https://hub.docker.com/r/dpage/pgadmin4), web UI to monitor and manage multiple PostgreSQL instances
- [Swagger](https://hub.docker.com/r/swaggerapi/swagger-ui), web UI to visualize documentation from PostgREST
```
docker-compose -f docker-compose-optional.yml up
```
### Software reference
Out-of-the-box IoT platform using Docker with the following software:
- [Signal K server, a Free and Open Source universal marine data exchange format](https://signalk.org)
- [PostgreSQL, open source object-relational database system](https://postgresql.org)
- [TimescaleDB, Time-series data extends PostgreSQL](https://www.timescale.com)
- [PostGIS, a spatial database extender for PostgreSQL object-relational database.](https://postgis.net/)
- [Grafana, open observability platform | Grafana Labs](https://grafana.com)
### Support
To get support, please create a new [issue](https://github.com/xbgmsharp/PostgSail/issues).
There are most likely security flaws and bugs.
### Contribution
I'm happy to accept Pull Requests!
Feel free to contribute.
### License
This is free software, licensed under the Apache License, Version 2.0.

docker-compose.yml (Normal file, 41 lines)

@@ -0,0 +1,41 @@
version: '3.9'
services:
  db:
    image: xbgmsharp/timescaledb-postgis
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - POSTGRES_DB=postgres
      - TIMESCALEDB_TELEMETRY=off
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - $PWD/initdb:/docker-entrypoint-initdb.d
    logging:
      options:
        max-size: 10m
  api:
    image: postgrest/postgrest
    container_name: api
    restart: unless-stopped
    env_file: .env
    environment:
      PGRST_DB_SCHEMA: api
      PGRST_DB_ANON_ROLE: api_anonymous
      PGRST_OPENAPI_SERVER_PROXY_URI: http://127.0.0.1:3000
      PGRST_DB_PRE_REQUEST: public.check_jwt
    # host networking already exposes PostgREST on port 3000;
    # a ports mapping cannot be combined with network_mode: host
    network_mode: "host"
    depends_on:
      - db
    logging:
      options:
        max-size: 10m
volumes:
  data: {}

initdb/01signalk.sh (Executable file, 33 lines)

@@ -0,0 +1,33 @@
#!/bin/sh
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
echo "CUSTOMIZED OPTIONS"
echo "${PGDATA}"
echo "${PGDATA}/postgresql.conf"
cat << 'EOF' >> "${PGDATA}/postgresql.conf"
# Add settings for extensions here
shared_preload_libraries = 'timescaledb,pg_stat_statements,pg_cron'
timescaledb.telemetry_level=off
# pg_cron database
#cron.database_name = 'signalk'
# pg_cron connect via a unix domain socket
cron.host = '/var/run/postgresql/'
# monitoring https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING
track_io_timing = on
stats_temp_directory = '/tmp'
# Postgrest
# send logs where the collector can access them
#log_destination = 'stderr'
# collect stderr output to log files
#logging_collector = on
# save logs in pg_log/ under the pg data directory
#log_directory = 'pg_log'
# (optional) new log file per day
#log_filename = 'postgresql-%Y-%m-%d.log'
# log every kind of SQL statement
#log_statement = 'all'
EOF

initdb/02_1_signalk_api.sql (Normal file, 896 lines)

@@ -0,0 +1,896 @@
---------------------------------------------------------------------------
-- PostgSail => Postgres + TimescaleDB + PostGIS + PostgREST
--
-- Inspired from:
-- https://groups.google.com/g/signalk/c/W2H15ODCic4
--
-- Description:
-- Insert data into table metadata from API using PostgREST
-- Insert data into table metrics from API using PostgREST
-- TimescaleDB Hypertable to store signalk metrics
-- pgsql functions to generate logbook, stays, moorages
-- CRON functions to process logbook, stays, moorages
-- python functions for geo reverse and send notification via email and/or pushover
-- Views statistics, timelapse, monitoring, logs
-- Always store time in UTC
---------------------------------------------------------------------------
-- vessels signalk -(POST)-> metadata -> metadata_upsert -(trigger)-> metadata_upsert_fn (INSERT or UPDATE)
-- vessels signalk -(POST)-> metrics -> metrics -(trigger)-> metrics_fn new log,stay,moorage
---------------------------------------------------------------------------
-- Drop database
-- % docker exec -i timescaledb-postgis psql -Uusername -W postgres -c "drop database signalk;"
-- Import Schema
-- % cat signalk.sql | docker exec -i timescaledb-postgis psql -Uusername postgres
-- Export hypertable
-- % docker exec -i timescaledb-postgis psql -Uusername -W signalk -c "\COPY (SELECT * FROM api.metrics ORDER BY time ASC) TO '/var/lib/postgresql/data/metrics.csv' DELIMITER ',' CSV"
-- Export hypertable to gzip
-- # docker exec -i timescaledb-postgis psql -Uusername -W signalk -c "\COPY (SELECT * FROM api.metrics ORDER BY time ASC) TO PROGRAM 'gzip > /var/lib/postgresql/data/metrics.csv.gz' CSV HEADER;"
DO $$
BEGIN
RAISE WARNING '
_________.__ .__ ____ __.
/ _____/|__| ____ ____ _____ | | | |/ _|
\_____ \ | |/ ___\ / \\__ \ | | | <
/ \| / /_/ > | \/ __ \| |_| | \
/_______ /|__\___ /|___| (____ /____/____|__ \
\/ /_____/ \/ \/ \/
%', now();
END $$;
select version();
-- Database
CREATE DATABASE signalk;
-- connect to the DB
\c signalk
-- Schema
CREATE SCHEMA IF NOT EXISTS api;
COMMENT ON SCHEMA api IS 'api schema exposed to PostgREST';
-- Revoke default execute privileges on functions from PUBLIC
ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
-- Extensions
CREATE EXTENSION IF NOT EXISTS timescaledb; -- provides time series functions for PostgreSQL
-- CREATE EXTENSION IF NOT EXISTS timescaledb_toolkit; -- provides time series functions for PostgreSQL
CREATE EXTENSION IF NOT EXISTS postgis; -- adds support for geographic objects to the PostgreSQL object-relational database
CREATE EXTENSION IF NOT EXISTS plpgsql; -- PL/pgSQL procedural language
CREATE EXTENSION IF NOT EXISTS plpython3u; -- implements PL/Python based on the Python 3 language variant.
CREATE EXTENSION IF NOT EXISTS jsonb_plpython3u CASCADE; -- transform jsonb to Python json type.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements; -- provides a means for tracking planning and execution statistics of all SQL statements executed
-- Trust plpython3u language by default
UPDATE pg_language SET lanpltrusted = true WHERE lanname = 'plpython3u';
---------------------------------------------------------------------------
-- Tables
--
-- Metrics from signalk
CREATE TABLE IF NOT EXISTS api.metrics (
time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
client_id VARCHAR(255) NOT NULL,
latitude DOUBLE PRECISION NULL,
longitude DOUBLE PRECISION NULL,
speedOverGround DOUBLE PRECISION NULL,
courseOverGroundTrue DOUBLE PRECISION NULL,
windSpeedApparent DOUBLE PRECISION NULL,
angleSpeedApparent DOUBLE PRECISION NULL,
status VARCHAR(100) NULL,
metrics jsonb NULL
);
-- Description
COMMENT ON TABLE
api.metrics
IS 'Stores metrics from vessel';
-- Index todo!
CREATE INDEX ON api.metrics (client_id, time DESC);
CREATE INDEX ON api.metrics (status, time DESC);
-- json index??
CREATE INDEX ON api.metrics using GIN (metrics);
-- timescaledb hypertable
SELECT create_hypertable('api.metrics', 'time');
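-- Example (sketch): once metrics flow in, the hypertable can be queried with
-- TimescaleDB helpers such as time_bucket; the client_id value below is hypothetical
----> SELECT time_bucket('1 hour', time) AS bucket, avg(speedOverGround) AS avg_sog
---->     FROM api.metrics
---->     WHERE client_id = 'vessels.urn:mrn:imo:mmsi:123456789'
---->     GROUP BY bucket ORDER BY bucket;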
---------------------------------------------------------------------------
-- Metadata from signalk
CREATE TABLE IF NOT EXISTS api.metadata(
id SERIAL PRIMARY KEY,
name VARCHAR(150) NULL,
mmsi VARCHAR(10) NULL,
client_id VARCHAR(255) UNIQUE NOT NULL,
length DOUBLE PRECISION NULL,
beam DOUBLE PRECISION NULL,
height DOUBLE PRECISION NULL,
ship_type VARCHAR(255) NULL,
plugin_version VARCHAR(10) NOT NULL,
signalk_version VARCHAR(10) NOT NULL,
time TIMESTAMP WITHOUT TIME ZONE NOT NULL, -- last_update
active BOOLEAN DEFAULT True -- monitor online/offline
);
-- Description
COMMENT ON TABLE
api.metadata
IS 'Stores metadata from vessel';
-- Index todo!
CREATE INDEX metadata_client_id_idx ON api.metadata (client_id);
---------------------------------------------------------------------------
-- Logbook
-- todo add clientid ref
-- todo add consumption fuel?
-- todo add engine hour?
-- todo add geom object http://epsg.io/4326 EPSG:4326 Unit: degrees
-- todo add geog object http://epsg.io/3857 EPSG:3857 Unit: meters
-- https://postgis.net/workshops/postgis-intro/geography.html#using-geography
-- https://medium.com/coord/postgis-performance-showdown-geometry-vs-geography-ec99967da4f0
-- virtual logbook by boat by client_id impossible?
-- https://www.postgresql.org/docs/current/ddl-partitioning.html
-- Issue:
-- https://www.reddit.com/r/PostgreSQL/comments/di5mbr/postgresql_12_foreign_keys_and_partitioned_tables/f3tsoop/
CREATE TABLE IF NOT EXISTS api.logbook(
id SERIAL PRIMARY KEY,
client_id VARCHAR(255) NOT NULL REFERENCES api.metadata(client_id) ON DELETE RESTRICT,
-- client_id VARCHAR(255) NOT NULL,
active BOOLEAN DEFAULT false,
name VARCHAR(255),
_from VARCHAR(255),
_from_lat DOUBLE PRECISION NULL,
_from_lng DOUBLE PRECISION NULL,
_to VARCHAR(255),
_to_lat DOUBLE PRECISION NULL,
_to_lng DOUBLE PRECISION NULL,
--track_geom Geometry(LINESTRING)
track_geom geometry(LINESTRING,4326) NULL,
track_geog geography(LINESTRING) NULL,
_from_time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
_to_time TIMESTAMP WITHOUT TIME ZONE NULL,
distance NUMERIC, -- meters?
duration INTERVAL, -- duration in days and hours?
avg_speed DOUBLE PRECISION NULL,
max_speed DOUBLE PRECISION NULL,
max_wind_speed DOUBLE PRECISION NULL,
notes TEXT NULL
);
-- Description
COMMENT ON TABLE
api.logbook
IS 'Stores generated logbook';
COMMENT ON COLUMN api.logbook.distance IS 'in NM';
-- Index todo!
CREATE INDEX logbook_client_id_idx ON api.logbook (client_id);
CREATE INDEX ON api.logbook USING GIST ( track_geom );
COMMENT ON COLUMN api.logbook.track_geom IS 'postgis geometry type EPSG:4326 Unit: degrees';
CREATE INDEX ON api.logbook USING GIST ( track_geog );
COMMENT ON COLUMN api.logbook.track_geog IS 'postgis geography type default SRID 4326 Unit: degrees';
-- Otherwise -- ERROR: Only lon/lat coordinate systems are supported in geography.
---------------------------------------------------------------------------
-- Stays
-- todo add clientid ref
-- todo add FOREIGN KEY?
-- virtual logbook by boat?
CREATE TABLE IF NOT EXISTS api.stays(
id SERIAL PRIMARY KEY,
client_id VARCHAR(255) NOT NULL REFERENCES api.metadata(client_id) ON DELETE RESTRICT,
-- client_id VARCHAR(255) NOT NULL,
active BOOLEAN DEFAULT false,
name VARCHAR(255),
latitude DOUBLE PRECISION NULL,
longitude DOUBLE PRECISION NULL,
geog GEOGRAPHY(POINT) NULL,
arrived TIMESTAMP WITHOUT TIME ZONE NOT NULL,
departed TIMESTAMP WITHOUT TIME ZONE,
duration INTERVAL, -- duration in days and hours?
stay_code INT DEFAULT 1, -- REFERENCES api.stays_at(stay_code),
notes TEXT NULL
);
-- Description
COMMENT ON TABLE
api.stays
IS 'Stores generated stays';
-- Index
CREATE INDEX stays_client_id_idx ON api.stays (client_id);
CREATE INDEX ON api.stays USING GIST ( geog );
COMMENT ON COLUMN api.stays.geog IS 'postgis geography type default SRID 4326 Unit: degrees';
-- With other SRID ERROR: Only lon/lat coordinate systems are supported in geography.
---------------------------------------------------------------------------
-- Moorages
-- todo add clientid ref
-- virtual logbook by boat?
CREATE TABLE IF NOT EXISTS api.moorages(
id SERIAL PRIMARY KEY,
client_id VARCHAR(255) NOT NULL REFERENCES api.metadata(client_id) ON DELETE RESTRICT,
-- client_id VARCHAR(255) NOT NULL,
name VARCHAR(255),
country VARCHAR(255), -- todo need to update reverse_geocode_py_fn
stay_id INT NOT NULL, -- needed?
stay_code INT DEFAULT 1, -- needed? REFERENCES api.stays_at(stay_code)
stay_duration INTERVAL NULL,
reference_count INT DEFAULT 1,
latitude DOUBLE PRECISION NULL,
longitude DOUBLE PRECISION NULL,
geog GEOGRAPHY(POINT) NULL,
home_flag BOOLEAN DEFAULT false,
notes TEXT NULL
);
-- Description
COMMENT ON TABLE
api.moorages
IS 'Stores generated moorages';
-- Index
CREATE INDEX moorages_client_id_idx ON api.moorages (client_id);
CREATE INDEX ON api.moorages USING GIST ( geog );
COMMENT ON COLUMN api.moorages.geog IS 'postgis geography type default SRID 4326 Unit: degrees';
-- With other SRID ERROR: Only lon/lat coordinate systems are supported in geography.
---------------------------------------------------------------------------
-- Stay Type
CREATE TABLE IF NOT EXISTS api.stays_at(
stay_code INTEGER,
description TEXT
);
-- Description
COMMENT ON TABLE api.stays_at IS 'Stay Type';
-- Insert default possible values
INSERT INTO api.stays_at(stay_code, description) VALUES
(1, 'Unknown'),
(2, 'Anchor'),
(3, 'Mooring Buoy'),
(4, 'Dock');
---------------------------------------------------------------------------
-- Trigger Functions Metadata table
--
-- UPSERT - Insert vs Update for Metadata
DROP FUNCTION IF EXISTS metadata_upsert_trigger_fn;
CREATE FUNCTION metadata_upsert_trigger_fn() RETURNS trigger AS $metadata_upsert$
DECLARE
metadata_id integer;
metadata_active boolean;
BEGIN
-- UPSERT - Insert vs Update for Metadata
RAISE NOTICE 'metadata_upsert_trigger_fn';
SELECT m.id,m.active INTO metadata_id,metadata_active
FROM api.metadata m
WHERE (m.mmsi IS NOT NULL AND m.mmsi = NEW.mmsi)
OR (m.client_id IS NOT NULL AND m.client_id = NEW.client_id);
RAISE NOTICE 'metadata_id %', metadata_id;
IF metadata_id IS NOT NULL THEN
-- send notification if boat is back online
IF metadata_active is False THEN
-- Add monitor online entry to process queue for later notification
INSERT INTO process_queue (channel, payload, stored)
VALUES ('monitoring_online', metadata_id, now());
END IF;
-- Update vessel metadata
UPDATE api.metadata
SET
name = NEW.name,
mmsi = NEW.mmsi,
client_id = NEW.client_id,
length = NEW.length,
beam = NEW.beam,
height = NEW.height,
ship_type = NEW.ship_type,
plugin_version = NEW.plugin_version,
signalk_version = NEW.signalk_version,
time = NEW.time,
active = true
WHERE id = metadata_id;
RETURN NULL; -- Ignore insert
ELSE
-- Insert new vessel metadata
RETURN NEW; -- Insert new vessel metadata
END IF;
END;
$metadata_upsert$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.metadata_upsert_trigger_fn
IS 'process metadata from vessel, upsert';
-- Metadata notification for new vessel after insert
DROP FUNCTION IF EXISTS metadata_notification_trigger_fn;
CREATE FUNCTION metadata_notification_trigger_fn() RETURNS trigger AS $metadata_notification$
DECLARE
BEGIN
RAISE NOTICE 'metadata_notification_trigger_fn';
INSERT INTO process_queue (channel, payload, stored)
VALUES ('monitoring_online', NEW.id, now());
RETURN NULL;
END;
$metadata_notification$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.metadata_notification_trigger_fn
IS 'process metadata notification from vessel, monitoring_online';
-- Metadata trigger BEFORE INSERT
CREATE TRIGGER metadata_upsert_trigger BEFORE INSERT ON api.metadata
FOR EACH ROW EXECUTE FUNCTION metadata_upsert_trigger_fn();
-- Description
COMMENT ON TRIGGER
metadata_upsert_trigger ON api.metadata
IS 'BEFORE INSERT ON api.metadata run function metadata_upsert_trigger_fn';
-- Metadata trigger AFTER INSERT
CREATE TRIGGER metadata_notification_trigger AFTER INSERT ON api.metadata
FOR EACH ROW EXECUTE FUNCTION metadata_notification_trigger_fn();
-- Description
COMMENT ON TRIGGER
metadata_notification_trigger ON api.metadata
IS 'AFTER INSERT ON api.metadata run function metadata_notification_trigger_fn for notification on new vessel';
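-- Example (sketch, hypothetical values): the first INSERT below creates the vessel;
-- submitting the same client_id again is turned into an UPDATE by metadata_upsert_trigger_fn
----> INSERT INTO api.metadata (name, mmsi, client_id, plugin_version, signalk_version, time)
---->     VALUES ('My Boat', '123456789', 'vessels.urn:mrn:imo:mmsi:123456789', '0.0.1', '1.20.0', NOW() AT TIME ZONE 'UTC');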
---------------------------------------------------------------------------
-- Trigger Functions metrics table
--
-- Create a logbook or stay entry based on the vessel state, eg: navigation.state
-- https://github.com/meri-imperiumi/signalk-autostate
DROP FUNCTION IF EXISTS metrics_trigger_fn;
CREATE FUNCTION metrics_trigger_fn() RETURNS trigger AS $metrics$
DECLARE
previous_status varchar;
previous_time TIMESTAMP WITHOUT TIME ZONE;
stay_code integer;
logbook_id integer;
stay_id integer;
BEGIN
RAISE NOTICE 'metrics_trigger_fn';
-- todo: Check we have the boat metadata?
-- Do we have a log in progress?
-- Do we have a stay in progress?
-- Fetch the latest entry to compare status against the new status to be inserted
SELECT coalesce(m.status, 'moored'), m.time INTO previous_status, previous_time
FROM api.metrics m
WHERE m.client_id IS NOT NULL
AND m.client_id = NEW.client_id
ORDER BY m.time DESC LIMIT 1;
RAISE NOTICE 'Metrics Status, New:[%] Previous:[%]', NEW.status, previous_status;
IF NEW.status IS NULL THEN
RAISE WARNING 'Invalid new status [%], update to default moored', NEW.status;
NEW.status := 'moored';
END IF;
IF previous_status IS NULL THEN
RAISE WARNING 'Invalid previous status [%], update to default moored', previous_status;
previous_status := 'moored';
-- Add new stay as no previous entry exists
stay_code := 1; -- default stay type; otherwise a NULL stay_code would be inserted
INSERT INTO api.stays
(client_id, active, arrived, latitude, longitude, stay_code)
VALUES (NEW.client_id, true, NEW.time, NEW.latitude, NEW.longitude, stay_code)
RETURNING id INTO stay_id;
-- Add stay entry to process queue for further processing
INSERT INTO process_queue (channel, payload, stored) values ('new_stay', stay_id, now());
RAISE WARNING 'Insert first stay as no previous metrics exist, stay_id %', stay_id;
END IF;
IF previous_time = NEW.time THEN
-- Ignore entry if same time
RAISE WARNING 'Ignoring metric, duplicate time [%] = [%]', previous_time, NEW.time;
RETURN NULL;
END IF;
--
-- Check the state and if any previous/current entry
IF previous_status <> NEW.status AND (NEW.status = 'sailing' OR NEW.status = 'motoring') THEN
-- Start new log
RAISE WARNING 'Start new log, New:[%] Previous:[%]', NEW.status, previous_status;
RAISE NOTICE 'Inserting new trip [%]', NEW.status;
INSERT INTO api.logbook
(client_id, active, _from_time, _from_lat, _from_lng)
VALUES (NEW.client_id, true, NEW.time, NEW.latitude, NEW.longitude);
-- End current stay
-- Fetch stay_id by client_id
SELECT id INTO stay_id
FROM api.stays s
WHERE s.client_id IS NOT NULL
AND s.client_id = NEW.client_id
AND active IS true
LIMIT 1;
RAISE NOTICE 'Updating stay status [%] [%] [%]', stay_id, NEW.status, NEW.time;
IF stay_id IS NOT NULL THEN
UPDATE api.stays
SET
active = false,
departed = NEW.time
WHERE id = stay_id;
-- Add moorage entry to process queue for further processing
INSERT INTO process_queue (channel, payload, stored) values ('new_moorage', stay_id, now());
ELSE
RAISE WARNING 'Invalid stay_id [%] [%]', stay_id, NEW.time;
END IF;
ELSIF previous_status <> NEW.status AND (NEW.status = 'moored' OR NEW.status = 'anchored') THEN
-- Start new stays
RAISE WARNING 'Start new stay, New:[%] Previous:[%]', NEW.status, previous_status;
RAISE NOTICE 'Inserting new stay [%]', NEW.status;
-- if metric status is anchored set stay_code accordingly
stay_code = 1;
IF NEW.status = 'anchored' THEN
stay_code = 2;
END IF;
-- Add new stay
INSERT INTO api.stays
(client_id, active, arrived, latitude, longitude, stay_code)
VALUES (NEW.client_id, true, NEW.time, NEW.latitude, NEW.longitude, stay_code)
RETURNING id INTO stay_id;
-- Add stay entry to process queue for further processing
INSERT INTO process_queue (channel, payload, stored) values ('new_stay', stay_id, now());
-- End current log/trip
-- Fetch logbook_id by client_id
SELECT id INTO logbook_id
FROM api.logbook l
WHERE l.client_id IS NOT NULL
AND l.client_id = NEW.client_id
AND active IS true
LIMIT 1;
IF logbook_id IS NOT NULL THEN
-- todo check on time start vs end
RAISE NOTICE 'Updating trip status [%] [%] [%]', logbook_id, NEW.status, NEW.time;
UPDATE api.logbook
SET
active = false,
_to_time = NEW.time,
_to_lat = NEW.latitude,
_to_lng = NEW.longitude
WHERE id = logbook_id;
-- Add logbook entry to process queue for later processing
INSERT INTO process_queue (channel, payload, stored) values ('new_logbook', logbook_id, now());
ELSE
RAISE WARNING 'Invalid logbook_id [%] [%]', logbook_id, NEW.time;
END IF;
END IF;
RETURN NEW; -- Finally insert the actual new metric
END;
$metrics$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.metrics_trigger_fn
IS 'process metrics from vessel, generate new_logbook and new_stay';
--
-- Triggers logbook update on metrics insert
CREATE TRIGGER metrics_trigger BEFORE INSERT ON api.metrics
FOR EACH ROW EXECUTE FUNCTION metrics_trigger_fn();
-- Description
COMMENT ON TRIGGER
metrics_trigger ON api.metrics
IS 'BEFORE INSERT ON api.metrics run function metrics_trigger_fn';
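-- Example (sketch, hypothetical values): a metric with status 'sailing' following a
-- 'moored' one opens a new logbook entry and closes the active stay
----> INSERT INTO api.metrics (time, client_id, latitude, longitude, status)
---->     VALUES (NOW() AT TIME ZONE 'UTC', 'vessels.urn:mrn:imo:mmsi:123456789', 60.07, 23.53, 'sailing');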
---------------------------------------------------------------------------
-- Functions API schema
-- Export a log entry to geojson
DROP FUNCTION IF EXISTS api.export_logbook_geojson_point_fn;
CREATE OR REPLACE FUNCTION api.export_logbook_geojson_point_fn(IN _id INTEGER, OUT geojson JSON) RETURNS JSON AS $export_logbook_geojson_point$
DECLARE
logbook_rec record;
BEGIN
-- If _id is not NULL and > 0
SELECT * INTO logbook_rec
FROM api.logbook WHERE id = _id;
WITH log AS (
SELECT m.time as time, m.latitude as lat, m.longitude as lng, m.courseOverGroundTrue as cog
FROM api.metrics m
WHERE m.latitude IS NOT null
AND m.longitude IS NOT null
AND m.time >= logbook_rec._from_time::timestamp without time zone
AND m.time <= logbook_rec._to_time::timestamp without time zone
GROUP by m.time,m.latitude,m.longitude,m.courseOverGroundTrue
ORDER BY m.time ASC)
SELECT json_build_object(
'type', 'FeatureCollection',
'crs', json_build_object(
'type', 'name',
'properties', json_build_object(
'name', 'EPSG:4326'
)
),
'features', json_agg(
json_build_object(
'type', 'Feature',
-- 'id', {id}, -- the GeoJson spec includes an 'id' field, but it is optional, replace {id} with your id field
'geometry', ST_AsGeoJSON(st_makepoint(lng,lat))::json,
'properties', json_build_object(
-- list of fields
'time', time,
'cog', cog
)
)
)
) INTO geojson
FROM log;
END;
$export_logbook_geojson_point$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.export_logbook_geojson_point_fn
IS 'Export a log entry to geojson feature point with Time and courseOverGroundTrue properties';
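-- Usage (sketch, the log id value is hypothetical):
----> SELECT api.export_logbook_geojson_point_fn(1);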
-- Export a log entry to geojson
DROP FUNCTION IF EXISTS api.export_logbook_geojson_linestring_fn;
CREATE FUNCTION api.export_logbook_geojson_linestring_fn(IN _id INTEGER) RETURNS JSON AS $export_logbook_geojson_linestring$
DECLARE
geojson json;
BEGIN
-- If _id is not NULL and > 0
SELECT ST_AsGeoJSON(l.track_geom) INTO geojson
FROM api.logbook l
WHERE l.id = _id;
RETURN geojson;
END;
$export_logbook_geojson_linestring$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.export_logbook_geojson_linestring_fn
IS 'Export a log entry to geojson feature linestring';
-- Find all logs from and to a moorage geopoint within 100m
DROP FUNCTION IF EXISTS api.find_log_from_moorage_fn;
CREATE FUNCTION api.find_log_from_moorage_fn(IN _id INTEGER)
    RETURNS TABLE(id INTEGER, name VARCHAR, _from VARCHAR, _to VARCHAR,
                  _from_time TIMESTAMP WITHOUT TIME ZONE, _to_time TIMESTAMP WITHOUT TIME ZONE,
                  distance NUMERIC, duration INTERVAL) AS $find_log_from_moorage$
DECLARE
    moorage_rec record;
BEGIN
    -- If _id is not NULL and > 0
    SELECT * INTO moorage_rec
        FROM api.moorages m
        WHERE m.id = _id;
    -- find all logs from and to the moorage geopoint within 100m
    RETURN QUERY
    SELECT l.id, l.name, l._from, l._to, l._from_time, l._to_time, l.distance, l.duration
        FROM api.logbook l
        WHERE ST_DWithin(
                Geography(ST_MakePoint(l._from_lng, l._from_lat)),
                moorage_rec.geog,
                100 -- in meters
            )
            OR ST_DWithin(
                Geography(ST_MakePoint(l._to_lng, l._to_lat)),
                moorage_rec.geog,
                100 -- in meters
            )
        ORDER BY l._from_time DESC;
END;
$find_log_from_moorage$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.find_log_from_moorage_fn
IS 'Find all logs from and to a moorage geopoint within 100m';
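-- Usage (sketch, the moorage id value is hypothetical):
----> SELECT * FROM api.find_log_from_moorage_fn(1);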
-- Find all stays within 100m of a moorage geopoint
DROP FUNCTION IF EXISTS api.find_stay_from_moorage_fn;
CREATE FUNCTION api.find_stay_from_moorage_fn(IN _id INTEGER)
    RETURNS TABLE(id INTEGER, arrived TIMESTAMP WITHOUT TIME ZONE, departed TIMESTAMP WITHOUT TIME ZONE,
                  duration INTERVAL, description TEXT) AS $find_stay_from_moorage$
DECLARE
    moorage_rec record;
BEGIN
    -- If _id is not NULL and > 0
    SELECT * INTO moorage_rec
        FROM api.moorages m
        WHERE m.id = _id;
    -- find all stays within 100m of the moorage geopoint
    RETURN QUERY
    SELECT s.id, s.arrived, s.departed, s.duration, sa.description
        FROM api.stays s, api.stays_at sa
        WHERE ST_DWithin(
                s.geog,
                moorage_rec.geog,
                100 -- in meters
            )
            AND s.departed IS NOT NULL
            AND s.name IS NOT NULL
            AND s.stay_code = sa.stay_code
        ORDER BY s.arrived DESC;
END;
$find_stay_from_moorage$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
api.find_stay_from_moorage_fn
IS 'Find all stays within 100m of a moorage geopoint';
---------------------------------------------------------------------------
-- Views
--
CREATE VIEW first_metric AS
SELECT *
FROM api.metrics
ORDER BY time ASC LIMIT 1;
CREATE VIEW last_metric AS
SELECT *
FROM api.metrics
ORDER BY time DESC LIMIT 1;
CREATE VIEW trip_in_progress AS
SELECT *
FROM api.logbook
WHERE active IS true;
CREATE VIEW stay_in_progress AS
SELECT *
FROM api.stays
WHERE active IS true;
-- TODO: Use materialized views instead as it is not live data
-- Logs web view
DROP VIEW IF EXISTS api.logs_view;
CREATE VIEW api.logs_view AS
SELECT id,name,_from,_to,_from_time,_to_time,distance,duration
FROM api.logbook
WHERE _to_time IS NOT NULL
ORDER BY _from_time DESC;
-- Description
COMMENT ON VIEW
api.logs_view
IS 'Logs web view';
-- Stays web view
-- TODO group by month
DROP VIEW IF EXISTS api.stays_view;
CREATE VIEW api.stays_view AS
SELECT
concat(
extract(DAYS FROM (s.departed-s.arrived)::interval),
' days',
--DATE_TRUNC('day', s.departed-s.arrived),
' stay at ',
s.name,
' in ',
RTRIM(TO_CHAR(s.departed, 'Month')),
' ',
TO_CHAR(s.departed, 'YYYY')
) as Name,
s.name AS Moorage,
s.arrived AS Arrived,
s.departed AS Departed,
sa.description AS "Stayed at",
(s.departed-s.arrived) AS Duration
FROM api.stays s, api.stays_at sa
WHERE departed is not null
AND s.name is not null
AND s.stay_code = sa.stay_code
ORDER BY s.arrived DESC;
-- Description
COMMENT ON VIEW
api.stays_view
IS 'Stays web view';
-- Moorages web view
-- TODO: this is wrong, using distinct(m.name); it should use the postgis geog feature
--DROP VIEW IF EXISTS api.moorages_view_old;
--CREATE VIEW api.moorages_view_old AS
-- SELECT
-- m.name AS Moorage,
-- sa.description AS "Default Stay",
-- sum((m.departed-m.arrived)) OVER (PARTITION by m.name) AS "Total Stay",
-- count(m.departed) OVER (PARTITION by m.name) AS "Arrivals & Departures"
-- FROM api.moorages m, api.stays_at sa
-- WHERE departed is not null
-- AND m.name is not null
-- AND m.stay_code = sa.stay_code
-- GROUP BY m.name,sa.description,m.departed,m.arrived
-- ORDER BY 4 DESC;
-- the good way?
DROP VIEW IF EXISTS api.moorages_view;
CREATE OR REPLACE VIEW api.moorages_view AS
SELECT
m.name AS Moorage,
sa.description AS "Default Stay",
EXTRACT(DAY FROM justify_hours ( m.stay_duration )) AS "Total Stay",
m.reference_count AS "Arrivals & Departures",
m.geog
-- m.stay_duration,
-- justify_hours ( m.stay_duration )
FROM api.moorages m, api.stays_at sa
WHERE m.name is not null
AND m.stay_code = sa.stay_code
GROUP BY m.name,sa.description,m.stay_duration,m.reference_count,m.geog
-- ORDER BY 4 DESC;
ORDER BY m.reference_count DESC;
-- Description
COMMENT ON VIEW
api.moorages_view
IS 'Moorages web view';
-- All moorages within 100 meters of the start of a logbook.
-- ST_DistanceSphere Returns minimum distance in meters between two lon/lat points.
--SELECT
-- m.name, ST_MakePoint(m._lng,m._lat),
-- l._from, ST_MakePoint(l._from_lng,l._from_lat),
-- ST_DistanceSphere(ST_MakePoint(m._lng,m._lat), ST_MakePoint(l._from_lng,l._from_lat))
-- FROM api.moorages m , api.logbook l
-- WHERE ST_DistanceSphere(ST_MakePoint(m._lng,m._lat), ST_MakePoint(l._from_lng,l._from_lat)) <= 100;
-- Stats web view
-- TODO....
-- first time entry from metrics
----> select * from api.metrics m ORDER BY m.time desc limit 1
-- last time entry from metrics
----> select * from api.metrics m ORDER BY m.time asc limit 1
-- max speed from logbook
-- max wind speed from logbook
----> select max(l.max_speed) as max_speed, max(l.max_wind_speed) as max_wind_speed from api.logbook l;
-- Total Distance from logbook
----> select sum(l.distance) as "Total Distance" from api.logbook l;
-- Total Time Underway from logbook
----> select sum(l.duration) as "Total Time Underway" from api.logbook l;
-- Longest Nonstop Sail from logbook, eg longest trip duration and distance
----> select max(l.duration),max(l.distance) from api.logbook l;
CREATE VIEW api.stats_logs_view AS -- todo
WITH
meta AS (
SELECT m.name FROM api.metadata m ),
last_metric AS (
SELECT m.time FROM api.metrics m ORDER BY m.time DESC limit 1),
first_metric AS (
SELECT m.time FROM api.metrics m ORDER BY m.time ASC limit 1),
logbook AS (
SELECT
count(*) AS "Number of Log Entries",
max(l.max_speed) AS "Max Speed",
max(l.max_wind_speed) AS "Max Wind Speed",
sum(l.distance) AS "Total Distance",
sum(l.duration) AS "Total Time Underway",
concat( max(l.distance), ' NM, ', max(l.duration), ' hours') AS "Longest Nonstop Sail"
FROM api.logbook l)
SELECT
m.name as Name,
fm.time AS first,
lm.time AS last,
l.*
FROM first_metric fm, last_metric lm, logbook l, meta m;
-- Home Ports / Unique Moorages
----> select count(*) as "Home Ports" from api.moorages m where home_flag is true;
-- Unique Moorages
----> select count(*) as "Unique Moorages" from api.moorages m;
-- Time Spent at Home Port(s)
----> select sum(m.stay_duration) as "Time Spent at Home Port(s)" from api.moorages m where home_flag is true;
-- OR
----> select m.stay_duration as "Time Spent at Home Port(s)" from api.moorages m where home_flag is true;
-- Time Spent Away
----> select sum(m.stay_duration) as "Time Spent Away" from api.moorages m where home_flag is false;
-- Time Spent Away order by, group by stay_code (Dock, Anchor, Mooring Buoys, Unclassified)
----> select sa.description,sum(m.stay_duration) as "Time Spent Away" from api.moorages m, api.stays_at sa where home_flag is false AND m.stay_code = sa.stay_code group by m.stay_code,sa.description order by m.stay_code;
CREATE VIEW api.stats_moorages_view AS -- todo
select *
from api.moorages;
--CREATE VIEW api.stats_view AS -- todo
-- WITH
-- logs AS (
-- SELECT * FROM api.stats_logs_view ),
-- moorages AS (
-- SELECT * FROM api.stats_moorages_view)
-- SELECT
-- l.*,
-- m.*
-- FROM logs l, moorages m;
-- global timelapse
-- TODO
CREATE VIEW timelapse AS -- todo
SELECT latitude, longitude from api.metrics;
-- View main monitoring for grafana
-- LAST Monitoring data from json!
CREATE VIEW api.monitoring AS
SELECT
time AS "time",
metrics-> 'environment.water.temperature' AS waterTemperature,
metrics-> 'environment.inside.temperature' AS insideTemperature,
metrics-> 'environment.outside.temperature' AS outsideTemperature,
metrics-> 'environment.wind.speedOverGround' AS windSpeedOverGround,
metrics-> 'environment.wind.directionGround' AS windDirectionGround,
metrics-> 'environment.inside.humidity' AS insideHumidity,
metrics-> 'environment.outside.humidity' AS outsideHumidity,
metrics-> 'environment.outside.pressure' AS outsidePressure,
metrics-> 'environment.inside.pressure' AS insidePressure
FROM api.metrics m
ORDER BY time DESC LIMIT 1;
CREATE VIEW api.monitoring_humidity AS
SELECT
time AS "time",
metrics-> 'environment.inside.humidity' AS insideHumidity,
metrics-> 'environment.outside.humidity' AS outsideHumidity
FROM api.metrics m
ORDER BY time DESC LIMIT 1;
-- View System RPI monitoring for grafana
-- View Electric monitoring for grafana
-- View main monitoring for grafana
-- LAST Monitoring data from json!
CREATE VIEW api.monitoring_temperatures AS
SELECT
time AS "time",
metrics-> 'environment.water.temperature' AS waterTemperature,
metrics-> 'environment.inside.temperature' AS insideTemperature,
metrics-> 'environment.outside.temperature' AS outsideTemperature
FROM api.metrics m
ORDER BY time DESC LIMIT 1;
-- json key regexp
-- https://stackoverflow.com/questions/38204467/selecting-for-a-jsonb-array-contains-regex-match
-- Last voltage data from json!
CREATE VIEW api.voltage AS
SELECT
time AS "time",
cast(metrics-> 'electrical.batteries.AUX2.voltage' AS numeric) AS AUX2,
cast(metrics-> 'electrical.batteries.House.voltage' AS numeric) AS House,
cast(metrics-> 'environment.rpi.pijuice.gpioVoltage' AS numeric) AS gpioVoltage,
cast(metrics-> 'electrical.batteries.Seatalk.voltage' AS numeric) AS SeatalkVoltage,
cast(metrics-> 'electrical.batteries.Starter.voltage' AS numeric) AS StarterVoltage,
cast(metrics-> 'environment.rpi.pijuice.batteryVoltage' AS numeric) AS RPIBatteryVoltage,
cast(metrics-> 'electrical.batteries.victronDevice.voltage' AS numeric) AS victronDeviceVoltage
FROM api.metrics m
ORDER BY time DESC LIMIT 1;
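-- Example (sketch): keys inside the jsonb payload can also be matched with a regexp,
-- e.g. every battery voltage path (pattern is illustrative)
----> SELECT m.time, kv.key, kv.value
---->     FROM api.metrics m, jsonb_each(m.metrics) kv
---->     WHERE kv.key ~ '^electrical\.batteries\..*\.voltage$'
---->     ORDER BY m.time DESC LIMIT 10;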
---------------------------------------------------------------------------
-- API helper functions
--
DROP FUNCTION IF EXISTS api.export_logbook_gpx_py_fn;
CREATE OR REPLACE FUNCTION api.export_logbook_gpx_py_fn(IN _id INTEGER) RETURNS XML
AS $export_logbook_gpx_py$
import uuid
# Fetch the track points for this log entry
# (sketch: assumes the track comes from api.metrics between the logbook start and end times)
plan = plpy.prepare("""SELECT m.time AS time, m.latitude AS lat, m.longitude AS lng
    FROM api.metrics m, api.logbook l
    WHERE l.id = $1
        AND m.latitude IS NOT NULL
        AND m.longitude IS NOT NULL
        AND m.time >= l._from_time
        AND m.time <= l._to_time
    ORDER BY m.time ASC""", ["integer"])
mytrack = plpy.execute(plan, [_id])
if not mytrack:
    plpy.error('No track points found for logbook id %s' % _id)
# BEGIN GPX XML format
gpx_data = f"""<?xml version="1.0"?>
<gpx version="1.1" creator="PostgSail" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.topografix.com/GPX/1/1" xmlns:gpxx="http://www.garmin.com/xmlschemas/GpxExtensions/v3" xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd" xmlns:opencpn="http://www.opencpn.org">
<trk>
<link href="https://openplotter.cloud/log/{_id}">
<text>openplotter trip log todo</text>
</link>
<extensions>
<opencpn:guid>{uuid.uuid4()}</opencpn:guid>
<opencpn:viz>1</opencpn:viz>
<opencpn:start>{mytrack[0]['time']}</opencpn:start>
<opencpn:end>{mytrack[-1]['time']}</opencpn:end>
</extensions>
<trkseg>\n""";
##print(gpx_data)
# LOOP through log entry
for entry in mytrack:
##print(entry['time'])
gpx_data += f""" <trkpt lat="{entry['lat']}" lon="{entry['lng']}">
<time>{entry['time']}</time>
</trkpt>\n""";
# END GPX XML format
gpx_data += """ </trkseg>
</trk>
</gpx>""";
return gpx_data
$export_logbook_gpx_py$ LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
api.export_logbook_gpx_py_fn
IS 'TODO, Export a log entry to GPX XML format using plpython3u';
--DROP FUNCTION IF EXISTS api.export_logbook_csv_fn;
--CREATE OR REPLACE FUNCTION api.export_logbook_csv_fn(IN _id INTEGER) RETURNS void
--AS $export_logbook_csv$
-- TODO
--$export_logbook_csv$ language plpgsql;
-- Description
--COMMENT ON FUNCTION
-- api.export_logbook_csv_fn
-- IS 'TODO, ...';


@@ -0,0 +1,270 @@
---------------------------------------------------------------------------
-- cron job function helpers on public schema
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
-- Check for new logbook pending update
CREATE FUNCTION cron_process_new_logbook_fn() RETURNS void AS $$
declare
process_rec record;
begin
-- Check for new logbook pending update
RAISE NOTICE 'cron_process_new_logbook_fn';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_logbook' AND processed IS NULL
ORDER BY stored ASC
LOOP
RAISE NOTICE '-> cron_process_new_logbook_fn [%]', process_rec.payload;
-- update logbook
PERFORM process_logbook_queue_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_logbook_fn
IS 'init by pg_cron to check for new logbook pending update, if so perform process_logbook_queue_fn';
-- Check for new stay pending update
CREATE FUNCTION cron_process_new_stay_fn() RETURNS void AS $$
declare
process_rec record;
begin
-- Check for new stay pending update
RAISE NOTICE 'cron_process_new_stay_fn';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_stay' AND processed IS NULL
ORDER BY stored ASC
LOOP
RAISE NOTICE '-> cron_process_new_stay_fn [%]', process_rec.payload;
-- update stay
PERFORM process_stay_queue_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_stay_fn
IS 'init by pg_cron to check for new stay pending update, if so perform process_stay_queue_fn';
-- Check for new moorage pending update
DROP FUNCTION IF EXISTS cron_process_new_moorage_fn;
CREATE OR REPLACE FUNCTION cron_process_new_moorage_fn() RETURNS void AS $$
declare
process_rec record;
begin
-- Check for new moorage pending update
RAISE NOTICE 'cron_process_new_moorage_fn';
FOR process_rec in
SELECT * FROM process_queue
WHERE channel = 'new_moorage' AND processed IS NULL
ORDER BY stored ASC
LOOP
RAISE NOTICE '-> cron_process_new_moorage_fn [%]', process_rec.payload;
-- update moorage
PERFORM process_moorage_queue_fn(process_rec.payload::INTEGER);
-- update process_queue table , processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_moorage_fn
IS 'init by pg_cron to check for new moorage pending update, if so perform process_moorage_queue_fn';
-- CRON Monitor offline pending notification
create function cron_process_monitor_offline_fn() RETURNS void AS $$
declare
metadata_rec record;
process_id integer;
user_settings jsonb;
app_settings jsonb;
begin
-- Check metadata last_update > 1h + cron_time(10m)
RAISE NOTICE 'cron_process_monitor_offline_fn';
FOR metadata_rec in
SELECT
*,
NOW() AT TIME ZONE 'UTC' as now,
NOW() AT TIME ZONE 'UTC' - INTERVAL '70 MINUTES' as interval
FROM api.metadata m
WHERE
m.time < NOW() AT TIME ZONE 'UTC' - INTERVAL '70 MINUTES'
AND active = True
ORDER BY m.time desc
LOOP
RAISE NOTICE '-> cron_process_monitor_offline_fn metadata_id [%]', metadata_rec.id;
-- update api.metadata table, set active to bool false
UPDATE api.metadata
SET
active = False
WHERE id = metadata_rec.id;
RAISE NOTICE '-> updated api.metadata table to inactive for [%]', metadata_rec.id;
-- Gather email and pushover app settings
app_settings = get_app_settings_fn();
-- Gather user settings
user_settings := get_user_settings_from_metadata_fn(metadata_rec.id::INTEGER);
--user_settings := get_user_settings_from_clientid_fn(metadata_rec.id::INTEGER);
RAISE DEBUG '-> debug monitor_offline get_user_settings_from_metadata_fn [%]', user_settings;
-- Send notification
--PERFORM send_notification_fn('monitor_offline'::TEXT, metadata_rec::RECORD);
PERFORM send_email_py_fn('monitor_offline'::TEXT, user_settings::JSONB, app_settings::JSONB);
--PERFORM send_pushover_py_fn('monitor_offline'::TEXT, user_settings::JSONB, app_settings::JSONB);
-- log/insert/update process_queue table with processed
INSERT INTO process_queue
(channel, payload, stored, processed)
VALUES
('monitoring_offline', metadata_rec.id, metadata_rec.interval, now())
RETURNING id INTO process_id;
RAISE NOTICE '-> updated process_queue table [%]', process_id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_monitor_offline_fn
IS 'init by pg_cron to monitor offline pending notification, if so perform send_email or send_pushover based on user preferences';
-- CRON for monitor back online pending notification
DROP FUNCTION IF EXISTS cron_process_monitor_online_fn;
CREATE FUNCTION cron_process_monitor_online_fn() RETURNS void AS $$
declare
process_rec record;
metadata_rec record;
user_settings jsonb;
app_settings jsonb;
begin
-- Check for monitor online pending notification
RAISE NOTICE 'cron_process_monitor_online_fn';
FOR process_rec in
SELECT * from process_queue
where channel = 'monitoring_online' and processed is null
order by stored asc
LOOP
RAISE NOTICE '-> cron_process_monitor_online_fn metadata_id [%]', process_rec.payload;
SELECT * INTO metadata_rec
FROM api.metadata
WHERE id = process_rec.payload::INTEGER;
-- Gather email and pushover app settings
app_settings = get_app_settings_fn();
-- Gather user settings
user_settings := get_user_settings_from_metadata_fn(metadata_rec.id::INTEGER);
--user_settings := get_user_settings_from_clientid_fn((metadata_rec.client_id::INTEGER, );
RAISE NOTICE '-> debug monitor_online get_user_settings_from_metadata_fn [%]', user_settings;
-- Send notification
--PERFORM send_notification_fn('monitor_online'::TEXT, metadata_rec::RECORD);
PERFORM send_email_py_fn('monitor_online'::TEXT, user_settings::JSONB, app_settings::JSONB);
--PERFORM send_pushover_py_fn('monitor_online'::TEXT, user_settings::JSONB, app_settings::JSONB);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_monitor_online_fn
IS 'init by pg_cron to monitor back online pending notification, if so perform send_email or send_pushover based on user preferences';
-- CRON for new account pending notification
CREATE FUNCTION cron_process_new_account_fn() RETURNS void AS $$
declare
process_rec record;
begin
-- Check for new account pending update
RAISE NOTICE 'cron_process_new_account_fn';
FOR process_rec in
SELECT * from process_queue
where channel = 'new_account' and processed is null
order by stored asc
LOOP
RAISE NOTICE '-> cron_process_new_account_fn [%]', process_rec.payload;
-- update account
PERFORM process_account_queue_fn(process_rec.payload::TEXT);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_account_fn
IS 'init by pg_cron to check for new account pending update, if so perform process_account_queue_fn';
-- CRON for new vessel pending notification
CREATE FUNCTION cron_process_new_vessel_fn() RETURNS void AS $$
declare
process_rec record;
begin
-- Check for new vessel pending update
RAISE NOTICE 'cron_process_new_vessel_fn';
FOR process_rec in
SELECT * from process_queue
where channel = 'new_vessel' and processed is null
order by stored asc
LOOP
RAISE NOTICE '-> cron_process_new_vessel_fn [%]', process_rec.payload;
-- update vessel
PERFORM process_vessel_queue_fn(process_rec.payload::TEXT);
-- update process_queue entry as processed
UPDATE process_queue
SET
processed = NOW()
WHERE id = process_rec.id;
RAISE NOTICE '-> updated process_queue table [%]', process_rec.id;
END LOOP;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_process_new_vessel_fn
IS 'init by pg_cron to check for new vessel pending update, if so perform process_vessel_queue_fn';
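-- Example (sketch): the functions above are meant to be scheduled with pg_cron;
-- job names and intervals below are illustrative
----> SELECT cron.schedule('new_logbook', '*/5 * * * *', 'SELECT public.cron_process_new_logbook_fn()');
----> SELECT cron.schedule('new_stay', '*/5 * * * *', 'SELECT public.cron_process_new_stay_fn()');
----> SELECT cron.schedule('monitor_offline', '*/10 * * * *', 'SELECT public.cron_process_monitor_offline_fn()');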
-- CRON for Vacuum database
CREATE FUNCTION cron_vaccum_fn() RETURNS void AS $$
declare
begin
-- Vacuum
RAISE NOTICE 'cron_vaccum_fn';
VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.logbook;
VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.stays;
VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.moorages;
VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.metrics;
VACUUM (FULL, VERBOSE, ANALYZE, INDEX_CLEANUP) api.metadata;
END;
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
public.cron_vaccum_fn
    IS 'init by pg_cron to full vacuum tables on schema api';

View File

@@ -0,0 +1,910 @@
---------------------------------------------------------------------------
-- signalk db public schema
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
CREATE SCHEMA IF NOT EXISTS public;
COMMENT ON SCHEMA public IS 'backend functions';
---------------------------------------------------------------------------
-- python reverse_geocode
--
-- https://github.com/CartoDB/labs-postgresql/blob/master/workshop/plpython.md
--
CREATE TABLE IF NOT EXISTS geocoders(
name TEXT UNIQUE,
url TEXT,
reverse_url TEXT
);
-- Description
COMMENT ON TABLE
public.geocoders
IS 'geo service nominatim url';
INSERT INTO geocoders VALUES
('nominatim',
NULL,
'https://nominatim.openstreetmap.org/reverse');
DROP FUNCTION IF EXISTS reverse_geocode_py_fn;
CREATE OR REPLACE FUNCTION reverse_geocode_py_fn(IN geocoder TEXT, IN lon NUMERIC, IN lat NUMERIC,
OUT geo_name TEXT)
AS $reverse_geocode_py$
import requests
# Use the shared cache to avoid preparing the geocoder metadata
if geocoder in SD:
plan = SD[geocoder]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT reverse_url AS url FROM geocoders WHERE name = $1", ["text"])
SD[geocoder] = plan
# Execute the statement with the geocoder param and limit to 1 result
rv = plpy.execute(plan, [geocoder], 1)
url = rv[0]['url']
# Make the request to the geocoder API
payload = {"lon": lon, "lat": lat, "format": "jsonv2", "zoom": 18}
r = requests.get(url, params=payload)
# Return the place name or a default when not found
if r.status_code == 200 and "name" in r.json():
    return r.json()["name"]
else:
    plpy.warning('Failed to receive a geo full address {}'.format(r.json()))
    return 'unknown'
$reverse_geocode_py$ LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.reverse_geocode_py_fn
IS 'query reverse geo service to return location name';
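-- Example usage (hypothetical coordinates, run manually against the live service):
-- SELECT public.reverse_geocode_py_fn('nominatim', 1.4440116::NUMERIC, 38.8871::NUMERIC);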
---------------------------------------------------------------------------
-- python template email/pushover
--
CREATE TABLE IF NOT EXISTS email_templates(
name TEXT UNIQUE,
email_subject TEXT,
email_content TEXT,
pushover_title TEXT,
pushover_message TEXT
);
-- Description
COMMENT ON TABLE
public.email_templates
IS 'email/message templates for notifications';
-- with escape value, eg: E'A\nB\r\nC'
-- https://stackoverflow.com/questions/26638615/insert-line-break-in-postgresql-when-updating-text-field
INSERT INTO email_templates VALUES
('logbook',
'New Logbook Entry',
E'Hello __RECIPIENT__,\n\nWe just wanted to let you know that you have a new entry on openplotter.cloud: "__LOGBOOK_NAME__"\r\n\r\nSee more details at __LOGBOOK_LINK__\n\nHappy sailing!\nThe Saillogger Team',
'New Logbook Entry',
E'We just wanted to let you know that you have a new entry on openplotter.cloud: "__LOGBOOK_NAME__"\r\n\r\nSee more details at __LOGBOOK_LINK__\n\nHappy sailing!\nThe Saillogger Team'),
('user',
'Welcome',
    E'Hello __RECIPIENT__,\nCongratulations!\nYou successfully created an account.\nRemember to register your vessel.\nHappy sailing!',
    'Welcome',
    E'Hi!\nYou successfully created an account.\nRemember to register your vessel.\nHappy sailing!'),
('vessel',
'New vessel',
E'Hi!\nHow are you?\n__BOAT__ is now linked to your account.',
'New vessel',
E'Hi!\nHow are you?\n__BOAT__ is now linked to your account.'),
('monitor_offline',
'Offline',
E'__BOAT__ has been offline for more than an hour\r\nFind more details at https://openplotter.cloud/boats/\n',
'Offline',
E'__BOAT__ has been offline for more than an hour\r\nFind more details at https://openplotter.cloud/boats/\n'),
('monitor_online',
'Online',
E'__BOAT__ just came online\nFind more details at https://openplotter.cloud/boats/\n',
'Online',
E'__BOAT__ just came online\nFind more details at https://openplotter.cloud/boats/\n'),
('badge',
'New Badge!',
E'Hello __RECIPIENT__,\nCongratulations! You have just unlocked a new badge: __BADGE_NAME__\nSee more details at https://openplotter.cloud/badges\nHappy sailing!\nThe Saillogger Team',
'New Badge!',
E'Congratulations!\nYou have just unlocked a new badge: __BADGE_NAME__\nSee more details at https://openplotter.cloud/badges\nHappy sailing!\nThe Saillogger Team');
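-- The __TOKENS__ above (__RECIPIENT__, __BOAT__, __LOGBOOK_NAME__, __BADGE_NAME__) are substituted
-- at send time by send_email_py_fn/send_pushover_py_fn from the user_settings JSONB.
-- Illustrative substitution with a hypothetical value:
-- SELECT replace(email_content, '__BOAT__', 'Example Boat') FROM email_templates WHERE name = 'monitor_online';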
---------------------------------------------------------------------------
-- python send email
--
-- TODO read table from python or send email data as params?
-- https://www.programcreek.com/python/example/3684/email.utils.formatdate
DROP FUNCTION IF EXISTS send_email_py_fn;
CREATE OR REPLACE FUNCTION send_email_py_fn(IN email_type TEXT, IN _user JSONB, IN app JSONB) RETURNS void
AS $send_email_py$
# Import smtplib for the actual sending function
import smtplib
# Import the email modules we need
#from email.message import EmailMessage
from email.utils import formatdate,make_msgid
from email.mime.text import MIMEText
# Use the shared cache to avoid preparing the email metadata
if email_type in SD:
plan = SD[email_type]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT * FROM email_templates WHERE name = $1", ["text"])
SD[email_type] = plan
# Execute the statement with the email_type param and limit to 1 result
rv = plpy.execute(plan, [email_type], 1)
email_subject = rv[0]['email_subject']
email_content = rv[0]['email_content']
# Replace fields using input jsonb obj
plpy.notice('Parameters [{}] [{}]'.format(_user, app))
if not _user or not app:
plpy.error('Error no parameters')
return None
if 'logbook_name' in _user and _user['logbook_name']:
email_content = email_content.replace('__LOGBOOK_NAME__', _user['logbook_name'])
if 'recipient' in _user and _user['recipient']:
email_content = email_content.replace('__RECIPIENT__', _user['recipient'])
if 'boat' in _user and _user['boat']:
email_content = email_content.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
email_content = email_content.replace('__BADGE_NAME__', _user['badge'])
email_from = 'root@localhost'
if 'app.email_from' in app and app['app.email_from']:
email_from = app['app.email_from']
#plpy.notice('Sending email from [{}] [{}]'.format(email_from, app['app.email_from']))
email_to = 'root@localhost'
if 'email' in _user and _user['email']:
email_to = _user['email']
#plpy.notice('Sending email to [{}] [{}]'.format(email_to, _user['email']))
else:
plpy.error('Error email to')
return None
msg = MIMEText(email_content, 'plain', 'utf-8')
msg["Subject"] = email_subject
msg["From"] = email_from
msg["To"] = email_to
msg["Date"] = formatdate()
msg["Message-ID"] = make_msgid()
server_smtp = 'localhost'
if 'app.email_server' in app and app['app.email_server']:
server_smtp = app['app.email_server']
# Send the message via our own SMTP server.
try:
# send your message with credentials specified above
with smtplib.SMTP(server_smtp, 25) as server:
if 'app.email_user' in app and app['app.email_user'] \
and 'app.email_pass' in app and app['app.email_pass']:
server.starttls()
server.login(app['app.email_user'], app['app.email_pass'])
#server.send_message(msg)
server.sendmail(msg["To"], msg["From"], msg.as_string())
server.quit()
# tell the script to report if your message was sent or which errors need to be fixed
plpy.notice('Sent email successfully to [{}] [{}]'.format(msg["To"], msg["Subject"]))
return None
except OSError as error:
plpy.error(error)
except smtplib.SMTPConnectError:
plpy.error('Failed to connect to the server. Bad connection settings?')
except smtplib.SMTPServerDisconnected:
    plpy.error('Server unexpectedly disconnected. Wrong user/password?')
except smtplib.SMTPException as e:
plpy.error('SMTP error occurred: ' + str(e))
$send_email_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.send_email_py_fn
IS 'Send email notification using plpython3u';
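-- Example invocation (hypothetical values; assumes a reachable SMTP server in app_settings, run manually):
-- SELECT public.send_email_py_fn('user'::TEXT,
--     '{"email": "demo@example.com", "recipient": "Demo"}'::JSONB,
--     '{"app.email_server": "localhost", "app.email_from": "no-reply@example.com"}'::JSONB);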
---------------------------------------------------------------------------
-- python send pushover
--
-- TODO read app and user key from table?
-- https://pushover.net/
DROP FUNCTION IF EXISTS send_pushover_py_fn;
CREATE OR REPLACE FUNCTION send_pushover_py_fn(IN message_type TEXT, IN _user JSONB, IN app JSONB) RETURNS void
AS $send_pushover_py$
import requests
# Use the shared cache to avoid preparing the email metadata
if message_type in SD:
plan = SD[message_type]
# A prepared statement from Python
else:
plan = plpy.prepare("SELECT * FROM email_templates WHERE name = $1", ["text"])
SD[message_type] = plan
# Execute the statement with the message_type param and limit to 1 result
rv = plpy.execute(plan, [message_type], 1)
pushover_title = rv[0]['pushover_title']
pushover_message = rv[0]['pushover_message']
# Replace fields using input jsonb obj
if 'logbook_name' in _user and _user['logbook_name']:
pushover_message = pushover_message.replace('__LOGBOOK_NAME__', _user['logbook_name'])
if 'recipient' in _user and _user['recipient']:
pushover_message = pushover_message.replace('__RECIPIENT__', _user['recipient'])
if 'boat' in _user and _user['boat']:
pushover_message = pushover_message.replace('__BOAT__', _user['boat'])
if 'badge' in _user and _user['badge']:
pushover_message = pushover_message.replace('__BADGE_NAME__', _user['badge'])
pushover_token = None
if 'app.pushover_token' in app and app['app.pushover_token']:
pushover_token = app['app.pushover_token']
else:
plpy.error('Error no pushover token defined, check app settings')
return None
pushover_user = None
if 'pushover_key' in _user and _user['pushover_key']:
pushover_user = _user['pushover_key']
else:
plpy.error('Error no pushover user token defined, check user settings')
return None
# requests
r = requests.post("https://api.pushover.net/1/messages.json", data = {
"token": pushover_token,
"user": pushover_user,
"title": pushover_title,
"message": pushover_message
})
#print(r.text)
# Check the response status
if r.status_code == 200:
    plpy.notice('Sent pushover successfully to [{}] [{}] [{}]'.format("__USER__", pushover_title, r.text))
else:
    plpy.error('Failed to send pushover')
return None
$send_pushover_py$ TRANSFORM FOR TYPE jsonb LANGUAGE plpython3u;
-- Description
COMMENT ON FUNCTION
public.send_pushover_py_fn
IS 'Send pushover notification using plpython3u';
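-- Example invocation (hypothetical tokens; requires a valid Pushover application token and user key):
-- SELECT public.send_pushover_py_fn('monitor_online'::TEXT,
--     '{"boat": "Example Boat", "pushover_key": "hypothetical_user_key"}'::JSONB,
--     '{"app.pushover_token": "hypothetical_app_token"}'::JSONB);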
---------------------------------------------------------------------------
-- Functions public schema
--
-- Update a logbook with avg data
-- TODO using timescale function
CREATE OR REPLACE FUNCTION logbook_update_avg_fn(
IN _id integer,
IN _start TEXT,
IN _end TEXT,
OUT avg_speed double precision,
OUT max_speed double precision,
OUT max_wind_speed double precision
) AS $logbook_update_avg$
BEGIN
RAISE NOTICE '-> Updating avg for logbook id=%, start: "%", end: "%"', _id, _start, _end;
SELECT AVG(speedOverGround), MAX(speedOverGround), MAX(windspeedapparent) INTO
avg_speed, max_speed, max_wind_speed
FROM api.metrics
WHERE time >= _start::TIMESTAMP WITHOUT TIME ZONE AND
time <= _end::TIMESTAMP WITHOUT TIME ZONE;
RAISE NOTICE '-> Updated avg for logbook id=%, avg_speed:%, max_speed:%, max_wind_speed:%', _id, avg_speed, max_speed, max_wind_speed;
END;
$logbook_update_avg$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_update_avg_fn
    IS 'Update logbook details with calculated average and max data, AVG(speedOverGround), MAX(speedOverGround), MAX(windspeedapparent)';
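-- Example usage (hypothetical logbook id and time range):
-- SELECT * FROM public.logbook_update_avg_fn(1, '2022-07-01T10:00:00'::TEXT, '2022-07-01T12:00:00'::TEXT);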
-- Create a LINESTRING for Geometry
-- Todo validate st_length unit?
-- https://postgis.net/docs/ST_Length.html
CREATE FUNCTION logbook_update_geom_distance_fn(IN _id integer, IN _start text, IN _end text,
OUT _track_geom Geometry(LINESTRING),
OUT _track_distance double precision
) AS $logbook_geo_distance$
BEGIN
SELECT ST_MakeLine(
ARRAY(
--SELECT ST_SetSRID(ST_MakePoint(longitude,latitude),4326) as geo_point
SELECT st_makepoint(longitude,latitude) AS geo_point
FROM api.metrics m
WHERE m.latitude IS NOT NULL
AND m.longitude IS NOT NULL
AND m.time >= _start::TIMESTAMP WITHOUT TIME ZONE
AND m.time <= _end::TIMESTAMP WITHOUT TIME ZONE
ORDER BY m.time ASC
)
) INTO _track_geom;
RAISE NOTICE '-> GIS LINESTRING %', _track_geom;
-- SELECT ST_Length(_track_geom,false) INTO _track_distance;
-- SELECT TRUNC (st_length(st_transform(track_geom,4326)::geography)::INT / 1.852) from logbook where id = 209; -- in NM
SELECT TRUNC (ST_Length(_track_geom,false)::INT / 1.852) INTO _track_distance; -- in NM
RAISE NOTICE '-> GIS Length %', _track_distance;
END;
$logbook_geo_distance$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.logbook_update_geom_distance_fn
    IS 'Update logbook details with geometry data and distance, ST_Length';
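-- Example usage (hypothetical logbook id and time range), returns the LINESTRING and its distance:
-- SELECT * FROM public.logbook_update_geom_distance_fn(1, '2022-07-01T10:00:00'::TEXT, '2022-07-01T12:00:00'::TEXT);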
-- Update pending new logbook from process queue
DROP FUNCTION IF EXISTS process_logbook_queue_fn;
CREATE OR REPLACE FUNCTION process_logbook_queue_fn(IN _id integer) RETURNS void AS $process_logbook_queue$
DECLARE
logbook_rec record;
from_name varchar;
to_name varchar;
log_name varchar;
avg_rec record;
geo_rec record;
user_settings jsonb;
app_settings jsonb;
BEGIN
-- If _id is not NULL
SELECT * INTO logbook_rec
FROM api.logbook
WHERE active IS false
AND id = _id;
-- geo reverse _from_lng _from_lat
-- geo reverse _to_lng _to_lat
from_name := reverse_geocode_py_fn('nominatim', logbook_rec._from_lng::NUMERIC, logbook_rec._from_lat::NUMERIC);
to_name := reverse_geocode_py_fn('nominatim', logbook_rec._to_lng::NUMERIC, logbook_rec._to_lat::NUMERIC);
SELECT CONCAT(from_name, ' to ' , to_name) INTO log_name;
-- SELECT CONCAT("_from" , ' to ' ,"_to") from api.logbook where id = 1;
    -- Generate logbook name, concat _from_location and _to_location
-- Update logbook entry with the latest metric data and calculate data
avg_rec := logbook_update_avg_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
geo_rec := logbook_update_geom_distance_fn(logbook_rec.id, logbook_rec._from_time::TEXT, logbook_rec._to_time::TEXT);
-- todo check on time start vs end
RAISE NOTICE 'Updating logbook entry [%] [%] [%]', logbook_rec.id, logbook_rec._from_time, logbook_rec._to_time;
UPDATE api.logbook
SET
duration = (logbook_rec._to_time::timestamp without time zone - logbook_rec._from_time::timestamp without time zone),
avg_speed = avg_rec.avg_speed,
max_speed = avg_rec.max_speed,
max_wind_speed = avg_rec.max_wind_speed,
_from = from_name,
_to = to_name,
name = log_name,
track_geom = geo_rec._track_geom,
distance = geo_rec._track_distance
WHERE id = logbook_rec.id;
-- Gather email and pushover app settings
app_settings := get_app_settings_fn();
-- Gather user settings
user_settings := get_user_settings_from_log_fn(logbook_rec::RECORD);
--user_settings := '{"logbook_name": "' || log_name || '"}, "{"email": "' || account_rec.email || '", "recipient": "' || account_rec.first || '}';
--user_settings := '{"logbook_name": "' || log_name || '"}';
-- Send notification email, pushover
--PERFORM send_notification('logbook'::TEXT, logbook_rec::RECORD);
PERFORM send_email_py_fn('logbook'::TEXT, user_settings::JSONB, app_settings::JSONB);
--PERFORM send_pushover_py_fn('logbook'::TEXT, user_settings::JSONB, app_settings::JSONB);
END;
$process_logbook_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_logbook_queue_fn
IS 'Update logbook details when completed, logbook_update_avg_fn, logbook_update_geom_distance_fn, reverse_geocode_py_fn';
-- Update pending new stay from process queue
DROP FUNCTION IF EXISTS process_stay_queue_fn;
CREATE OR REPLACE FUNCTION process_stay_queue_fn(IN _id integer) RETURNS void AS $process_stay_queue$
DECLARE
stay_rec record;
_name varchar;
BEGIN
RAISE WARNING 'process_stay_queue_fn';
RAISE WARNING 'jwt %', current_setting('request.jwt.claims', true);
RAISE WARNING 'cur_user %', current_user;
-- If _id is not NULL
SELECT * INTO stay_rec
FROM api.stays
WHERE id = _id;
-- AND client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%';
-- geo reverse _lng _lat
_name := reverse_geocode_py_fn('nominatim', stay_rec.longitude::NUMERIC, stay_rec.latitude::NUMERIC);
RAISE NOTICE 'Updating stay entry [%]', stay_rec.id;
UPDATE api.stays
SET
name = _name,
geog = Geography(ST_MakePoint(stay_rec.longitude, stay_rec.latitude))
WHERE id = stay_rec.id;
-- Notification email/pushover?
END;
$process_stay_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_stay_queue_fn
IS 'Update stay details, reverse_geocode_py_fn';
-- Handle moorage insert or update from stays
-- todo validate geography unit
-- https://postgis.net/docs/ST_DWithin.html
DROP FUNCTION IF EXISTS process_moorage_queue_fn;
CREATE OR REPLACE FUNCTION process_moorage_queue_fn(IN _id integer) RETURNS void AS $process_moorage_queue$
DECLARE
stay_rec record;
moorage_rec record;
BEGIN
-- If _id is not NULL
SELECT * INTO stay_rec
FROM api.stays
WHERE active IS false
AND departed IS NOT NULL
AND id = _id;
FOR moorage_rec in
SELECT
*
FROM api.moorages
WHERE
latitude IS NOT NULL
AND longitude IS NOT NULL
AND ST_DWithin(
-- Geography(ST_MakePoint(stay_rec._lng, stay_rec._lat)),
stay_rec.geog,
-- Geography(ST_MakePoint(longitude, latitude)),
geog,
100 -- in meters ?
)
ORDER BY id ASC
LOOP
-- found previous stay within 100m of the new moorage
IF moorage_rec.id IS NOT NULL AND moorage_rec.id > 0 THEN
RAISE NOTICE 'Found previous stay within 100m of moorage %', moorage_rec;
EXIT; -- exit loop
END IF;
END LOOP;
-- if with in 100m update reference count and stay duration
-- else insert new entry
IF moorage_rec.id IS NOT NULL AND moorage_rec.id > 0 THEN
RAISE NOTICE 'Update moorage %', moorage_rec;
UPDATE api.moorages
SET
reference_count = moorage_rec.reference_count + 1,
stay_duration =
moorage_rec.stay_duration +
(stay_rec.departed::timestamp without time zone - stay_rec.arrived::timestamp without time zone)
WHERE id = moorage_rec.id;
else
RAISE NOTICE 'Insert new moorage entry from stay %', stay_rec;
        -- Ensure the stay has a name
IF stay_rec.name IS NULL THEN
stay_rec.name := reverse_geocode_py_fn('nominatim', stay_rec.longitude::NUMERIC, stay_rec.latitude::NUMERIC);
END IF;
-- Insert new moorage from stay
INSERT INTO api.moorages
(client_id, name, stay_id, stay_code, stay_duration, reference_count, latitude, longitude, geog)
VALUES (
stay_rec.client_id,
stay_rec.name,
stay_rec.id,
stay_rec.stay_code,
(stay_rec.departed::timestamp without time zone - stay_rec.arrived::timestamp without time zone),
1, -- default reference_count
stay_rec.latitude,
stay_rec.longitude,
Geography(ST_MakePoint(stay_rec.longitude, stay_rec.latitude))
);
END IF;
END;
$process_moorage_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_moorage_queue_fn
IS 'Handle moorage insert or update from stays';
-- process new account notification
DROP FUNCTION IF EXISTS process_account_queue_fn;
CREATE OR REPLACE FUNCTION process_account_queue_fn(IN _email TEXT) RETURNS void AS $process_account_queue$
DECLARE
account_rec record;
user_settings jsonb;
app_settings jsonb;
BEGIN
-- If _email is not NULL
SELECT * INTO account_rec
FROM auth.accounts
WHERE email = _email;
-- Gather email and pushover app settings
app_settings := get_app_settings_fn();
-- Gather user settings
user_settings := '{"email": "' || account_rec.email || '", "recipient": "' || account_rec.first || '"}';
-- Send notification email, pushover
--PERFORM send_notification_fn('user'::TEXT, account_rec::RECORD);
PERFORM send_email_py_fn('user'::TEXT, user_settings::JSONB, app_settings::JSONB);
--PERFORM send_pushover_py_fn('user'::TEXT, user_settings::JSONB, app_settings::JSONB);
END;
$process_account_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_account_queue_fn
IS 'process new account notification';
-- process new vessel notification
DROP FUNCTION IF EXISTS process_vessel_queue_fn;
CREATE OR REPLACE FUNCTION process_vessel_queue_fn(IN _email TEXT) RETURNS void AS $process_vessel_queue$
DECLARE
vessel_rec record;
user_settings jsonb;
app_settings jsonb;
BEGIN
-- If _email is not NULL
SELECT * INTO vessel_rec
FROM auth.vessels
WHERE owner_email = _email;
-- Gather user_settings from
-- if notification email
-- -- Send email
--
-- Gather email and pushover app settings
app_settings := get_app_settings_fn();
-- Gather user settings
user_settings := '{"email": "' || vessel_rec.owner_email || '", "boat": "' || vessel_rec.name || '"}';
--user_settings := get_user_settings_from_clientid_fn();
-- Send notification email, pushover
--PERFORM send_notification_fn('vessel'::TEXT, vessel_rec::RECORD);
PERFORM send_email_py_fn('vessel'::TEXT, user_settings::JSONB, app_settings::JSONB);
--PERFORM send_pushover_py_fn('vessel'::TEXT, user_settings::JSONB, app_settings::JSONB);
END;
$process_vessel_queue$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.process_vessel_queue_fn
IS 'process new vessel notification';
-- Get user settings details from a log entry
DROP FUNCTION IF EXISTS get_app_settings_fn;
CREATE OR REPLACE FUNCTION get_app_settings_fn(OUT app_settings JSON) RETURNS JSON
AS $get_app_settings$
DECLARE
BEGIN
SELECT jsonb_object_agg(name,value) INTO app_settings
FROM public.app_settings
WHERE name LIKE '%app.email%' OR name LIKE '%app.pushover%';
END;
$get_app_settings$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.get_app_settings_fn
IS 'get app settings details, email, pushover';
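-- Example usage, returns a flat JSON object keyed by setting name:
-- SELECT public.get_app_settings_fn();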
-- Get user settings details from a log entry
DROP FUNCTION IF EXISTS get_user_settings_from_log_fn;
CREATE OR REPLACE FUNCTION get_user_settings_from_log_fn(IN logbook_rec RECORD, OUT user_settings JSON) RETURNS JSON
AS $get_user_settings_from_log$
DECLARE
BEGIN
-- If client_id is not NULL
IF logbook_rec.client_id IS NULL OR logbook_rec.client_id = '' THEN
RAISE WARNING '-> get_user_settings_from_log_fn invalid input %', logbook_rec.client_id;
END IF;
SELECT
json_build_object(
'boat' , v.name,
'recipient', a.first,
'email', v.owner_email,
'logbook_name', l.name) INTO user_settings
FROM auth.accounts a, auth.vessels v, api.metadata m, api.logbook l
WHERE lower(a.email) = lower(v.owner_email)
-- AND lower(v.name) = lower(m.name)
AND m.client_id = l.client_id
AND l.client_id = logbook_rec.client_id
AND l.id = logbook_rec.id;
END;
$get_user_settings_from_log$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.get_user_settings_from_log_fn
IS 'get user settings details from a log entry, initiate for logbook entry notification';
-- Get user settings details from a metadata entry
DROP FUNCTION IF EXISTS get_user_settings_from_metadata_fn;
CREATE OR REPLACE FUNCTION get_user_settings_from_metadata_fn(IN meta_id INTEGER, OUT user_settings JSON) RETURNS JSON
AS $get_user_settings_from_metadata$
DECLARE
BEGIN
-- If meta_id is not NULL
IF meta_id IS NULL OR meta_id < 1 THEN
RAISE WARNING '-> get_user_settings_from_metadata_fn invalid input %', meta_id;
END IF;
SELECT json_build_object(
'boat' , v.name,
'email', v.owner_email) INTO user_settings
FROM auth.vessels v, api.metadata m
WHERE
--lower(v.name) = lower(m.name) AND
m.id = meta_id;
END;
$get_user_settings_from_metadata$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.get_user_settings_from_metadata_fn
IS 'get user settings details from a metadata entry, initiate for monitoring offline,online notification';
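-- Example usage (hypothetical metadata id):
-- SELECT public.get_user_settings_from_metadata_fn(1);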
-- Get user settings details from a metadata entry
DROP FUNCTION IF EXISTS send_notification_fn;
CREATE OR REPLACE FUNCTION send_notification_fn(IN email_type TEXT, IN notification_rec RECORD) RETURNS JSON
AS $send_notification$
DECLARE
    app_settings JSONB;
    user_settings JSONB;
BEGIN
-- Gather email and pushover app settings
app_settings := get_app_settings_fn();
-- Gather user settings
--user_settings := '{"email": "' || vessel_rec.owner_email || '", "boat": "' || vessel_rec.name || '}';
--user_settings := get_user_settings_from_clientid_fn();
--user_settings := '{"email": "' || account_rec.email || '", "recipient": "' || account_rec.first || '}';
--user_settings := get_user_settings_from_metadata_fn();
--user_settings := '{"logbook_name": "' || log_name || '"}';
--user_settings := get_user_settings_from_log_fn();
-- Send notification email
PERFORM send_email_py_fn(email_type::TEXT, user_settings::JSONB, app_settings::JSONB);
-- Send notification pushover
--PERFORM send_pushover_py_fn(email_type::TEXT, user_settings::JSONB, app_settings::JSONB);
END;
$send_notification$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.send_notification_fn
IS 'TODO';
DROP FUNCTION IF EXISTS get_user_settings_from_clientid_fn;
CREATE OR REPLACE FUNCTION get_user_settings_from_clientid_fn(
IN clientid TEXT,
IN logbook_name TEXT,
OUT user_settings JSON
) RETURNS JSON
AS $get_user_settings_from_clientid$
DECLARE
BEGIN
-- If client_id is not NULL
    IF clientid IS NULL OR clientid = '' THEN
RAISE WARNING '-> get_user_settings_from_clientid_fn invalid input %', clientid;
END IF;
SELECT
json_build_object(
'boat' , v.name,
'recipient', a.first,
'email', v.owner_email ,
'settings', a.preferences,
'pushover_key', a.preferences->'pushover_key',
'badges', a.preferences->'badges',
'logbook_name', logbook_name ) INTO user_settings
FROM auth.accounts a, auth.vessels v, api.metadata m
WHERE lower(a.email) = lower(v.owner_email)
--AND lower(v.name) = lower(m.name)
AND m.mmsi = v.mmsi
AND m.client_id = clientid;
END;
$get_user_settings_from_clientid$ LANGUAGE plpgsql;
-- Description
COMMENT ON FUNCTION
public.get_user_settings_from_clientid_fn
IS 'get user settings details from a clientid, initiate for badge entry notification';
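-- Example usage (hypothetical clientid and logbook name):
-- SELECT public.get_user_settings_from_clientid_fn('vessels.urn:mrn:imo:mmsi:123456789', 'Example log');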
---------------------------------------------------------------------------
-- Queue handling
--
-- https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
-- https://www.depesz.com/2012/06/13/how-to-send-mail-from-database/
-- Listen/Notify way
--create function new_logbook_entry() returns trigger as $$
--begin
-- perform pg_notify('new_logbook_entry', NEW.id::text);
-- return NEW;
--END;
--$$ language plpgsql;
-- table way
CREATE TABLE IF NOT EXISTS public.process_queue (
id SERIAL PRIMARY KEY,
channel TEXT NOT NULL,
payload TEXT NOT NULL,
stored timestamptz NOT NULL,
processed timestamptz
);
COMMENT ON TABLE
public.process_queue
IS 'process queue for async job';
CREATE INDEX ON public.process_queue (channel);
CREATE INDEX ON public.process_queue (processed);
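-- Typical queue lifecycle (illustrative only; producers insert, cron consumers mark rows processed):
-- INSERT INTO process_queue (channel, payload, stored) VALUES ('new_account', 'demo@example.com', NOW());
-- SELECT * FROM process_queue WHERE channel = 'new_account' AND processed IS NULL ORDER BY stored ASC;
-- UPDATE process_queue SET processed = NOW() WHERE id = 1;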
create function new_account_entry_fn() returns trigger as $new_account_entry$
begin
insert into process_queue (channel, payload, stored) values ('new_account', NEW.email, now());
return NEW;
END;
$new_account_entry$ language plpgsql;
create function new_vessel_entry_fn() returns trigger as $new_vessel_entry$
begin
insert into process_queue (channel, payload, stored) values ('new_vessel', NEW.owner_email, now());
return NEW;
END;
$new_vessel_entry$ language plpgsql;
---------------------------------------------------------------------------
-- App settings
-- https://dba.stackexchange.com/questions/27296/storing-application-settings-with-different-datatypes#27297
-- https://stackoverflow.com/questions/6893780/how-to-store-site-wide-settings-in-a-database
-- http://cvs.savannah.gnu.org/viewvc/*checkout*/gnumed/gnumed/gnumed/server/sql/gmconfiguration.sql
CREATE TABLE IF NOT EXISTS public.app_settings (
name TEXT NOT NULL UNIQUE,
value TEXT NOT NULL
);
COMMENT ON TABLE public.app_settings IS 'application settings';
COMMENT ON COLUMN public.app_settings.name IS 'application settings name key';
COMMENT ON COLUMN public.app_settings.value IS 'application settings value';
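-- Example rows (hypothetical values; the real ones are loaded from environment variables in 99env.sh):
-- INSERT INTO app_settings (name, value) VALUES ('app.email_from', 'no-reply@example.com');
-- SELECT value FROM app_settings WHERE name = 'app.email_from';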
---------------------------------------------------------------------------
-- Badges descriptions
-- TODO add conditions
--
CREATE TABLE IF NOT EXISTS badges(
name TEXT UNIQUE,
description TEXT
);
-- Description
COMMENT ON TABLE
public.badges
IS 'Badges descriptions';
INSERT INTO badges VALUES
('Helmsman',
'Nice work logging your first sail! You are officially a helmsman now!'),
('Wake Maker',
'Yowzers! Welcome to the 15 knot+ club ya speed demon skipper!'),
('Explorer',
'It looks like home is where the helm is. Cheers to 10 days away from home port!'),
('Mooring Pro',
'It takes a lot of skill to "thread that floating needle" but seems like you have mastered mooring with 10 nights on buoy!'),
('Anchormaster',
'Hook, line and sinker, you have this anchoring thing down! 25 days on the hook for you!'),
('Traveler',
'Who needs to fly when one can sail! You are an international sailor. À votre santé!'),
('Stormtrooper',
    'Just like the elite defenders of the Empire, here you are, out braving your own hydro-empire in windspeeds above 30kts. Nice work trooper!'),
('Club Alaska',
'Home to the bears, glaciers, midnight sun and high adventure. Welcome to the Club Alaska Captain!'),
('Tropical Traveler',
'Look at you with your suntan, tropical drink and southern latitude!'),
('Aloha Award',
'Ticking off over 2300 NM across the great blue Pacific makes you the rare recipient of the Aloha Award. Well done and Aloha sailor!'),
('Tyee',
'You made it to the Tyee Outstation, the friendliest dock in Pacific Northwest!'),
-- TODO the sea is big and the world is not limited to the US
('Mediterranean Traveler',
    'You made it through the Mediterranean!');
create function public.process_badge_queue_fn() RETURNS void AS $process_badge_queue$
declare
    badge_rec record;
    badges_arr record;
begin
    -- TODO work in progress: gather the badges each user has already unlocked
    PERFORM json_array_elements_text((a.preferences->'badges')::json) FROM auth.accounts a;
    FOR badge_rec in
        SELECT
            name
        FROM badges
    LOOP
        -- TODO skip badges already unlocked, otherwise evaluate the conditions below
        RAISE NOTICE '-> process_badge_queue_fn checking badge [%]', badge_rec.name;
    END LOOP;
-- Helmsman
-- select count(l.id) api.logbook l where count(l.id) = 1;
-- Wake Maker
-- select max(l.max_wind_speed) api.logbook l where l.max_wind_speed >= 15;
-- Explorer
-- select sum(m.stay_duration) api.stays s where home_flag is false;
-- Mooring Pro
-- select sum(m.stay_duration) api.stays s where stay_code = 3;
-- Anchormaster
-- select sum(m.stay_duration) api.stays s where stay_code = 2;
-- Traveler
-- todo country to country.
-- Stormtrooper
-- select max(l.max_wind_speed) api.logbook l where l.max_wind_speed >= 30;
-- Club Alaska
-- todo country zone
-- Tropical Traveler
-- todo country zone
-- Aloha Award
-- todo pacific zone
-- TODO the sea is big and the world is not limited to the US
END
$process_badge_queue$ language plpgsql;
---------------------------------------------------------------------------
-- TODO add alert monitoring for Battery
---------------------------------------------------------------------------
-- TODO db-pre-request = "public.check_jwt"
-- Prevent unregister user or unregister vessel access
CREATE OR REPLACE FUNCTION public.check_jwt() RETURNS void AS $$
DECLARE
_role name;
_email name;
_mmsi name;
account_rec record;
vessel_rec record;
BEGIN
RAISE WARNING 'jwt %', current_setting('request.jwt.claims', true);
SELECT current_setting('request.jwt.claims', true)::json->>'email' INTO _email;
SELECT current_setting('request.jwt.claims', true)::json->>'role' INTO _role;
--RAISE WARNING 'jwt email %', current_setting('request.jwt.claims', true)::json->>'email';
--RAISE WARNING 'jwt role %', current_setting('request.jwt.claims', true)::json->>'role';
--RAISE WARNING 'cur_user %', current_user;
IF _role = 'user_role' THEN
-- Check the user exist in the accounts table
SELECT * INTO account_rec
FROM auth.accounts
WHERE auth.accounts.email = _email;
IF account_rec.email IS NULL THEN
RAISE EXCEPTION 'Invalid user'
            USING HINT = 'Unknown user';
END IF;
ELSIF _role = 'vessel_role' THEN
-- Check the vessel and user exist
SELECT * INTO vessel_rec
FROM auth.vessels, auth.accounts
WHERE auth.vessels.owner_email = _email
AND auth.accounts.email = _email;
IF vessel_rec.owner_email IS NULL THEN
RAISE EXCEPTION 'Invalid vessel'
            USING HINT = 'Unknown vessel owner_email';
END IF;
SELECT current_setting('request.jwt.claims', true)::json->>'mmsi' INTO _mmsi;
IF vessel_rec.mmsi IS NULL OR vessel_rec.mmsi <> _mmsi THEN
RAISE EXCEPTION 'Invalid vessel'
            USING HINT = 'Unknown vessel mmsi';
END IF;
PERFORM set_config('vessel.mmsi', vessel_rec.mmsi, false);
RAISE WARNING 'vessel.mmsi %', current_setting('vessel.mmsi', false);
ELSIF _role <> 'api_anonymous' THEN
RAISE EXCEPTION 'Invalid role'
USING HINT = 'Stop being so evil and maybe you can log in';
END IF;
END
$$ language plpgsql security definer;
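-- Example manual test (hypothetical claims), mimicking what PostgREST sets per request:
-- SELECT set_config('request.jwt.claims', '{"email":"demo@example.com","role":"user_role"}', false);
-- SELECT public.check_jwt();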
-- Function to trigger cron_jobs using API for tests.
-- Todo limit access and permission
-- Run cron jobs
CREATE OR REPLACE FUNCTION api.run_cron_jobs() RETURNS void AS $$
BEGIN
-- In correct order
perform public.cron_process_new_account_fn();
perform public.cron_process_new_vessel_fn();
perform public.cron_process_monitor_online_fn();
perform public.cron_process_new_logbook_fn();
perform public.cron_process_new_stay_fn();
perform public.cron_process_new_moorage_fn();
perform public.cron_process_monitor_offline_fn();
END
$$ language plpgsql security definer;
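-- Example manual trigger (see the permissions file for the scheduler grant):
-- SELECT api.run_cron_jobs();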

View File

@@ -0,0 +1,221 @@
---------------------------------------------------------------------------
-- SQL User Management - Storing Users and Passwords and Vessels
-- We put things inside the auth schema to hide
-- them from public view. Certain public procs/views will
-- refer to helpers and tables inside.
-- based on https://postgrest.org/en/stable/auth.html#sql-user-management
-- List current database
select current_database();
-- connect to the DB
\c signalk
CREATE SCHEMA IF NOT EXISTS auth;
COMMENT ON SCHEMA auth IS 'auth postgrest for users and vessels';
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; -- provides functions to generate universally unique identifiers (UUIDs)
DROP TABLE IF EXISTS auth.accounts CASCADE;
CREATE TABLE IF NOT EXISTS auth.accounts (
-- id UUID DEFAULT uuid_generate_v4() NOT NULL,
email text primary key check ( email ~* '^.+@.+\..+$' ),
  first text not null check (length(first) < 512),
  last text not null check (length(last) < 512),
pass text not null check (length(pass) < 512),
role name not null check (length(role) < 512),
preferences JSONB null,
created_at TIMESTAMP WITHOUT TIME ZONE default NOW()
);
-- Preferences jsonb
---- PushOver Notification, bool
---- PushOver user key, varchar
---- Email notification, bool
---- Instagram Handle, varchar
---- Timezone, TZ
---- Unit, bool
---- Preferred Homepage
---- Website, varchar or text
---- Public Profile
---- References to users ?
DROP TABLE IF EXISTS auth.vessels;
CREATE TABLE IF NOT EXISTS auth.vessels (
-- vesselId UUID PRIMARY KEY REFERENCES auth.accounts(id) ON DELETE RESTRICT,
owner_email TEXT PRIMARY KEY REFERENCES auth.accounts(email) ON DELETE RESTRICT,
mmsi TEXT UNIQUE,
name TEXT,
-- owner_email TEXT,
pass UUID,
role name not null check (length(role) < 512),
created_at TIMESTAMP WITHOUT TIME ZONE default NOW()
);
create or replace function
auth.check_role_exists() returns trigger as $$
begin
if not exists (select 1 from pg_roles as r where r.rolname = new.role) then
raise foreign_key_violation using message =
'unknown database role: ' || new.role;
return null;
end if;
return new;
end
$$ language plpgsql;
-- trigger check role on account
drop trigger if exists ensure_user_role_exists on auth.accounts;
create constraint trigger ensure_user_role_exists
after insert or update on auth.accounts
for each row
execute procedure auth.check_role_exists();
-- trigger add queue new account
CREATE TRIGGER new_account_entry AFTER INSERT ON auth.accounts
FOR EACH ROW EXECUTE FUNCTION public.new_account_entry_fn();
-- trigger check role on vessel
drop trigger if exists ensure_user_role_exists on auth.vessels;
create constraint trigger ensure_user_role_exists
after insert or update on auth.vessels
for each row
execute procedure auth.check_role_exists();
-- trigger add queue new vessel
CREATE TRIGGER new_vessel_entry AFTER INSERT ON auth.vessels
FOR EACH ROW EXECUTE FUNCTION public.new_vessel_entry_fn();
create extension if not exists pgcrypto;
create or replace function
auth.encrypt_pass() returns trigger as $$
begin
if tg_op = 'INSERT' or new.pass <> old.pass then
new.pass = crypt(new.pass, gen_salt('bf'));
end if;
return new;
end
$$ language plpgsql;
-- Description
COMMENT ON FUNCTION
auth.encrypt_pass
IS 'encrypt user pass on insert or update';
drop trigger if exists encrypt_pass on auth.accounts;
create trigger encrypt_pass
before insert or update on auth.accounts
for each row
execute procedure auth.encrypt_pass();
create or replace function
auth.user_role(email text, pass text) returns name
language plpgsql
as $$
begin
return (
select role from auth.accounts
where accounts.email = user_role.email
and accounts.pass = crypt(user_role.pass, accounts.pass)
);
end;
$$;
-- add type
CREATE TYPE auth.jwt_token AS (
token text
);
---------------------------------------------------------------------------
-- API account helper functions
--
-- login should be on your exposed schema
create or replace function
api.login(in email text, in pass text) returns auth.jwt_token as $$
declare
_role name;
result auth.jwt_token;
app_jwt_secret text;
begin
-- check email and password
select auth.user_role(email, pass) into _role;
if _role is null then
raise invalid_password using message = 'invalid user or password';
end if;
-- Get app_jwt_secret
SELECT value INTO app_jwt_secret
FROM app_settings
WHERE name = 'app.jwt_secret';
select jwt.sign(
-- row_to_json(r), ''
-- row_to_json(r)::json, current_setting('app.jwt_secret')::text
row_to_json(r)::json, app_jwt_secret
) as token
from (
select _role as role, login.email as email,
extract(epoch from now())::integer + 60*60 as exp
) r
into result;
return result;
end;
$$ language plpgsql security definer;
-- signup should be on your exposed schema
create or replace function
api.signup(in email text, in pass text, in firstname text, in lastname text) returns auth.jwt_token as $$
declare
_role name;
begin
-- check email and password
select auth.user_role(email, pass) into _role;
if _role is null then
RAISE WARNING 'Register new account email:[%]', email;
INSERT INTO auth.accounts ( email, pass, first, last, role)
VALUES (email, pass, firstname, lastname, 'user_role');
end if;
return ( api.login(email, pass) );
end;
$$ language plpgsql security definer;
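-- Example flow (hypothetical credentials):
-- SELECT api.signup('demo@example.com', 'mypassword', 'John', 'Doe');
-- SELECT api.login('demo@example.com', 'mypassword'); -- returns an auth.jwt_token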
---------------------------------------------------------------------------
-- API vessel helper functions
-- register_vessel should be on your exposed schema
create or replace function
api.register_vessel(in vessel_email text, in vessel_mmsi text, in vessel_name text) returns auth.jwt_token as $$
declare
result auth.jwt_token;
app_jwt_secret text;
vessel_rec record;
begin
-- check vessel exist
SELECT * INTO vessel_rec
FROM auth.vessels vessel
WHERE vessel.owner_email = vessel_email
AND vessel.mmsi = vessel_mmsi
AND LOWER(vessel.name) = LOWER(vessel_name);
if vessel_rec is null then
RAISE WARNING 'Register new vessel name:[%] mmsi:[%] for [%]', vessel_name, vessel_mmsi, vessel_email;
INSERT INTO auth.vessels (owner_email, mmsi, name, role)
VALUES (vessel_email, vessel_mmsi, vessel_name, 'vessel_role');
vessel_rec.role := 'vessel_role';
vessel_rec.owner_email = vessel_email;
vessel_rec.mmsi = vessel_mmsi;
end if;
-- Get app_jwt_secret
SELECT value INTO app_jwt_secret
FROM app_settings
WHERE name = 'app.jwt_secret';
select jwt.sign(
row_to_json(r)::json, app_jwt_secret
) as token
from (
select vessel_rec.role as role,
vessel_rec.owner_email as email,
vessel_rec.mmsi as mmsi
) r
into result;
return result;
end;
$$ language plpgsql security definer;
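-- Example (hypothetical vessel; the owner email must already exist in auth.accounts):
-- SELECT api.register_vessel('demo@example.com', '123456789', 'Example Boat');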

View File

@@ -0,0 +1,156 @@
---------------------------------------------------------------------------
-- signalk db permissions
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
---------------------------------------------------------------------------
-- Permissions roles
-- Users Sharing Role
-- https://postgrest.org/en/stable/auth.html#web-users-sharing-role
--
-- api_anonymous
-- nologin
-- api_anonymous role in the database with which to execute anonymous web requests.
-- api_anonymous allows JWT token generation with an expiration time via function api.login() from auth.accounts table
create role api_anonymous nologin noinherit;
grant usage on schema api to api_anonymous;
-- explicitly limit EXECUTE privileges to only signup and login functions
grant execute on function api.login(text,text) to api_anonymous;
grant execute on function api.signup(text,text,text,text) to api_anonymous;
-- explicitly limit EXECUTE privileges to pgrest db-pre-request function
grant execute on function public.check_jwt() to api_anonymous;
-- authenticator
-- login role
create role authenticator noinherit login password 'mysecretpassword';
grant api_anonymous to authenticator;
-- Grafana user and role with login, read-only
CREATE ROLE grafana WITH LOGIN PASSWORD 'mysecretpassword';
GRANT USAGE ON SCHEMA api TO grafana;
GRANT USAGE, SELECT ON SEQUENCE api.logbook_id_seq,api.metadata_id_seq,api.moorages_id_seq,api.stays_id_seq TO grafana;
GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata TO grafana;
-- User:
-- nologin
-- read-only for all, read-write on logbook, stays and moorages except for the name COLUMN ?
CREATE ROLE user_role WITH NOLOGIN;
GRANT user_role to authenticator;
GRANT USAGE ON SCHEMA api TO user_role;
GRANT USAGE, SELECT ON SEQUENCE api.logbook_id_seq,api.metadata_id_seq,api.moorages_id_seq,api.stays_id_seq TO user_role;
GRANT SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata TO user_role;
-- Allow update on table for notes
GRANT UPDATE ON TABLE api.logbook,api.moorages,api.stays TO user_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO user_role;
-- explicitly limit EXECUTE privileges to pgrest db-pre-request function
GRANT EXECUTE ON FUNCTION public.check_jwt() to user_role;
-- Allow read on VIEWS
GRANT SELECT ON TABLE api.logs_view,api.moorages_view,api.stays_view TO user_role;
-- Vessel:
-- nologin
-- insert-update-only for api.metrics,api.logbook,api.moorages,api.stays,api.metadata and sequences and process_queue
CREATE ROLE vessel_role WITH NOLOGIN;
GRANT vessel_role to authenticator;
GRANT USAGE ON SCHEMA api TO vessel_role;
GRANT INSERT, UPDATE, SELECT ON TABLE api.metrics,api.logbook,api.moorages,api.stays,api.metadata TO vessel_role;
GRANT USAGE, SELECT ON SEQUENCE api.logbook_id_seq,api.metadata_id_seq,api.moorages_id_seq,api.stays_id_seq TO vessel_role;
GRANT INSERT ON TABLE public.process_queue TO vessel_role;
GRANT USAGE, SELECT ON SEQUENCE public.process_queue_id_seq TO vessel_role;
-- explicitly limit EXECUTE privileges to pgrest db-pre-request function
GRANT EXECUTE ON FUNCTION public.check_jwt() to vessel_role;
-- TODO: currently cron functions are run as superuser, switch to scheduler role.
-- Scheduler read-only all, and write on logbook, stays, moorage, process_queue
-- Crons
CREATE ROLE scheduler WITH NOLOGIN;
GRANT scheduler to authenticator;
GRANT EXECUTE ON FUNCTION api.run_cron_jobs() to scheduler;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO scheduler;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO scheduler;
GRANT SELECT,UPDATE ON TABLE process_queue TO scheduler;
GRANT USAGE ON SCHEMA auth TO scheduler;
GRANT SELECT ON ALL TABLES IN SCHEMA auth TO scheduler;
---------------------------------------------------------------------------
-- Security policy
-- ROW LEVEL Security policy
ALTER TABLE api.metadata ENABLE ROW LEVEL SECURITY;
-- Administrator can see all rows and add any rows
CREATE POLICY admin_all ON api.metadata TO current_user
USING (true)
WITH CHECK (true);
-- Allow vessel_role to insert and select on their own records
CREATE POLICY api_vessel_role ON api.metadata TO vessel_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (true);
-- Allow user_role to update and select on their own records
CREATE POLICY api_user_role ON api.metadata TO user_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%');
ALTER TABLE api.metrics ENABLE ROW LEVEL SECURITY;
-- Administrator can see all rows and add any rows
CREATE POLICY admin_all ON api.metrics TO current_user
USING (true)
WITH CHECK (true);
-- Allow vessel_role to insert and select on their own records
CREATE POLICY api_vessel_role ON api.metrics TO vessel_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (true);
-- Allow user_role to update and select on their own records
CREATE POLICY api_user_role ON api.metrics TO user_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%');
-- Be sure to enable row level security on the table
ALTER TABLE api.logbook ENABLE ROW LEVEL SECURITY;
-- Create policies
-- Administrator can see all rows and add any rows
CREATE POLICY admin_all ON api.logbook TO current_user
USING (true)
WITH CHECK (true);
-- Allow vessel_role to insert and select on their own records
CREATE POLICY api_vessel_role ON api.logbook TO vessel_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (true);
-- Allow user_role to update and select on their own records
CREATE POLICY api_user_role ON api.logbook TO user_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%');
-- Be sure to enable row level security on the table
ALTER TABLE api.stays ENABLE ROW LEVEL SECURITY;
-- Administrator can see all rows and add any rows
CREATE POLICY admin_all ON api.stays TO current_user
USING (true)
WITH CHECK (true);
-- Allow vessel_role to insert and select on their own records
CREATE POLICY api_vessel_role ON api.stays TO vessel_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (true);
-- Allow user_role to update and select on their own records
CREATE POLICY api_user_role ON api.stays TO user_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%');
-- Be sure to enable row level security on the table
ALTER TABLE api.moorages ENABLE ROW LEVEL SECURITY;
-- Administrator can see all rows and add any rows
CREATE POLICY admin_all ON api.moorages TO current_user
USING (true)
WITH CHECK (true);
-- Allow vessel_role to insert and select on their own records
CREATE POLICY api_vessel_role ON api.moorages TO vessel_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (true);
-- Allow user_role to update and select on their own records
CREATE POLICY api_user_role ON api.moorages TO user_role
USING (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%')
WITH CHECK (client_id LIKE '%' || current_setting('vessel.mmsi', false) || '%');
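-- Illustrative check of how the policies key off the session setting (hypothetical mmsi):
-- SELECT set_config('vessel.mmsi', '123456789', false);
-- SET ROLE user_role; SELECT count(*) FROM api.logbook; RESET ROLE;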

108
initdb/03pgjwt.sql Normal file
View File

@@ -0,0 +1,108 @@
---------------------------------------------------------------------------
--
--\echo Use "CREATE EXTENSION pgjwt" to load this file. \quit
-- JWT implementation for postgres
-- https://github.com/michelp/pgjwt
--
-- List current database
select current_database();
-- connect to the DB
\c signalk
CREATE SCHEMA IF NOT EXISTS jwt;
COMMENT ON SCHEMA jwt IS 'jwt auth postgrest';
CREATE EXTENSION IF NOT EXISTS pgcrypto; -- provides cryptographic functions for PostgreSQL like HMAC
-- pgjwt--0.1.0--0.1.1.sql
--
CREATE OR REPLACE FUNCTION jwt.url_encode(data bytea) RETURNS text LANGUAGE sql AS $$
SELECT translate(encode(data, 'base64'), E'+/=\n', '-_');
$$ IMMUTABLE;
CREATE OR REPLACE FUNCTION jwt.url_decode(data text) RETURNS bytea LANGUAGE sql AS $$
WITH t AS (SELECT translate(data, '-_', '+/') AS trans),
rem AS (SELECT length(t.trans) % 4 AS remainder FROM t) -- compute padding size
SELECT decode(
t.trans ||
CASE WHEN rem.remainder > 0
THEN repeat('=', (4 - rem.remainder))
ELSE '' END,
'base64') FROM t, rem;
$$ IMMUTABLE;
CREATE OR REPLACE FUNCTION jwt.algorithm_sign(signables text, secret text, algorithm text)
RETURNS text LANGUAGE sql AS $$
WITH
alg AS (
SELECT CASE
WHEN algorithm = 'HS256' THEN 'sha256'
WHEN algorithm = 'HS384' THEN 'sha384'
WHEN algorithm = 'HS512' THEN 'sha512'
ELSE '' END AS id) -- hmac throws error
SELECT jwt.url_encode(public.hmac(signables, secret, alg.id)) FROM alg;
$$ IMMUTABLE;
CREATE OR REPLACE FUNCTION jwt.sign(payload json, secret text, algorithm text DEFAULT 'HS256')
RETURNS text LANGUAGE sql AS $$
WITH
header AS (
SELECT jwt.url_encode(convert_to('{"alg":"' || algorithm || '","typ":"JWT"}', 'utf8')) AS data
),
payload AS (
SELECT jwt.url_encode(convert_to(payload::text, 'utf8')) AS data
),
signables AS (
SELECT header.data || '.' || payload.data AS data FROM header, payload
)
SELECT
signables.data || '.' ||
jwt.algorithm_sign(signables.data, secret, algorithm) FROM signables;
$$ IMMUTABLE;
CREATE OR REPLACE FUNCTION jwt.verify(token text, secret text, algorithm text DEFAULT 'HS256')
RETURNS table(header json, payload json, valid boolean) LANGUAGE sql AS $$
SELECT
convert_from(jwt.url_decode(r[1]), 'utf8')::json AS header,
convert_from(jwt.url_decode(r[2]), 'utf8')::json AS payload,
r[3] = jwt.algorithm_sign(r[1] || '.' || r[2], secret, algorithm) AS valid
FROM regexp_split_to_array(token, '\.') r;
$$ IMMUTABLE;
-- pgjwt--0.1.1--0.2.0.sql
CREATE OR REPLACE FUNCTION jwt.try_cast_double(inp text)
RETURNS double precision AS $$
BEGIN
BEGIN
RETURN inp::double precision;
EXCEPTION
WHEN OTHERS THEN RETURN NULL;
END;
END;
$$ language plpgsql IMMUTABLE;
CREATE OR REPLACE FUNCTION jwt.verify(token text, secret text, algorithm text DEFAULT 'HS256')
RETURNS table(header json, payload json, valid boolean) LANGUAGE sql AS $$
SELECT
jwt.header AS header,
jwt.payload AS payload,
jwt.signature_ok AND tstzrange(
to_timestamp(jwt.try_cast_double(jwt.payload->>'nbf')),
to_timestamp(jwt.try_cast_double(jwt.payload->>'exp'))
) @> CURRENT_TIMESTAMP AS valid
FROM (
SELECT
convert_from(jwt.url_decode(r[1]), 'utf8')::json AS header,
convert_from(jwt.url_decode(r[2]), 'utf8')::json AS payload,
r[3] = jwt.algorithm_sign(r[1] || '.' || r[2], secret, algorithm) AS signature_ok
FROM regexp_split_to_array(token, '\.') r
) jwt
$$ IMMUTABLE;
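-- Example round-trip (hypothetical secret):
-- SELECT jwt.sign('{"sub":"demo"}'::json, 'mysecret');
-- SELECT * FROM jwt.verify(jwt.sign('{"sub":"demo"}'::json, 'mysecret'), 'mysecret');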

58
initdb/04pgcron.sql Normal file
View File

@@ -0,0 +1,58 @@
---------------------------------------------------------------------------
-- pg_cron async job
--
--CREATE DATABASE cron_database;
--CREATE SCHEMA IF NOT EXISTS cron;
--\c cron_database
\c postgres
CREATE EXTENSION IF NOT EXISTS pg_cron; -- provides a simple cron-based job scheduler for PostgreSQL
-- TRUNCATE table jobs
TRUNCATE TABLE cron.job CONTINUE IDENTITY RESTRICT;
-- Create an every 5 minute (or every minute?) job cron_process_new_logbook_fn
SELECT cron.schedule('cron_new_logbook', '*/5 * * * *', 'select public.cron_process_new_logbook_fn()') ;
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_logbook';
-- Create an every 5 minute job cron_process_new_stay_fn
SELECT cron.schedule('cron_new_stay', '*/5 * * * *', 'select public.cron_process_new_stay_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_stay';
-- Create an every 6 minute job cron_process_new_moorage_fn, delayed from stay to allow time to generate the geo reverse location, eg: name
SELECT cron.schedule('cron_new_moorage', '*/6 * * * *', 'select public.cron_process_new_moorage_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_moorage';
-- Create an every 10 minute job cron_process_monitor_offline_fn
SELECT cron.schedule('cron_monitor_offline', '*/10 * * * *', 'select public.cron_process_monitor_offline_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_monitor_offline';
-- Create an every 10 minute job cron_process_monitor_online_fn
SELECT cron.schedule('cron_monitor_online', '*/10 * * * *', 'select public.cron_process_monitor_online_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_monitor_online';
-- Create an every 5 minute job cron_process_new_account_fn
SELECT cron.schedule('cron_new_account', '*/5 * * * *', 'select public.cron_process_new_account_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_account';
-- Create an every 5 minute job cron_process_new_vessel_fn
SELECT cron.schedule('cron_new_vessel', '*/5 * * * *', 'select public.cron_process_new_vessel_fn()');
--UPDATE cron.job SET database = 'signalk' where jobname = 'cron_new_vessel';
-- Maintenance
-- Vacuum database “At 01:01 on Sunday.”
SELECT cron.schedule('cron_vacuum', '1 1 * * 0', 'select public.cron_vaccum_fn()');
-- Any other maintenance required?
UPDATE cron.job SET database = 'signalk';
UPDATE cron.job SET nodename = '/var/run/postgresql/'; -- VS default localhost ??
-- check job lists
SELECT * FROM cron.job;
-- unschedule by job id
--SELECT cron.unschedule(1);
-- unschedule by job name
--SELECT cron.unschedule('cron_new_logbook');
-- TRUNCATE TABLE cron.job_run_details
TRUNCATE TABLE cron.job_run_details CONTINUE IDENTITY RESTRICT;
-- check job log
select * from cron.job_run_details ORDER BY end_time DESC LIMIT 10;

25
initdb/99env.sh Executable file
View File

@@ -0,0 +1,25 @@
#------------------------------------------------------------------------------
# ENV Settings
#------------------------------------------------------------------------------
echo "Set password and settings from environment variables"
PGSAIL_VERSION=`cat /docker-entrypoint-initdb.d/PGSAIL_VERSION`
psql -U ${POSTGRES_USER} signalk <<-END
-- Application settings default
INSERT INTO app_settings (name, value) VALUES
('app.jwt_secret', '${PGRST_JWT_SECRET}'),
('app.email_server', '${PGSAIL_EMAIL_SERVER}'),
('app.email_user', '${PGSAIL_EMAIL_USER}'),
('app.email_pass', '${PGSAIL_EMAIL_PASS}'),
('app.email_from', '${PGSAIL_EMAIL_FROM}'),
('app.pushover_token', '${PGSAIL_PUSHOVER_TOKEN}'),
('app.pushover_app', '_todo_'),
('app.version', '${PGSAIL_VERSION}');
-- Update comment with version
COMMENT ON DATABASE signalk IS 'version ${PGSAIL_VERSION}';
-- Update password from env
ALTER ROLE authenticator WITH PASSWORD '${PGSAIL_AUTHENTICATOR_PASSWORD}';
ALTER ROLE grafana WITH PASSWORD '${PGSAIL_GRAFANA_PASSWORD}';
END

1
initdb/PGSAIL_VERSION Normal file
View File

@@ -0,0 +1 @@
0.0.4