Graphistry v2.36.6 is primarily for RAPIDS 0.18 support and bugfixes around the CSV uploader and /pivot investigation tool.
- Shared dask storage
- RAPIDS 0.18
- Dask-cuda-worker health check
- RMM_MAXIMUM_POOL_SIZE control
- Backups: Scripts for database export/import
- Data: Datetime inference on strings, community coloring determinism, CSV pivot format handling
- Investigations: Auth, sharing, and handling of deleted templates
| Component | Version |
| --- | --- |
| Python PyGraphistry client | 0.16.2 -> 0.17.2 |
| CUDA (In-Docker) | 10.2 + 11.0 |
| Elasticsearch node driver | 14.2.2 |
| Neo4j node driver | 4.1.1 |
| RAPIDS | 0.17 -> 0.18 |
| Splunk node SDK | 1.9.0 |
- File Uploader V2
- Azure GovCloud
- Shared dask storage: Notebooks and dask code can share data via `/dask-shared`, which is backed by `data/dask-shared`. Depending on your configuration, you may need to explicitly set permissions on the `data/dask-shared` folder.
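As a minimal sketch of the permission step (the folder path is from these notes; the wide-open mode 777 is only an illustrative assumption, and your deployment may prefer a narrower owner/group):

```shell
# Illustrative only: create the shared folder and open its permissions.
# (Mode 777 is an assumption; restrict to the uid/gid your containers use.)
mkdir -p data/dask-shared
chmod 777 data/dask-shared
stat -c '%a' data/dask-shared   # prints 777
```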
- RAPIDS 0.17 -> 0.18
- Dask-cuda-worker healthcheck: Added local CPU precheck
- forge-etl-python container is explicitly tagged for easier reuse of the full Python base layer
- RMM_MAXIMUM_POOL_SIZE: Optional parameter for controlling max pool size for `forge-etl-python` and `dask-cuda-worker` workers per-GPU, similar to existing `RMM_INITIAL_POOL_SIZE`. Defaults to maximum.
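A hedged sketch of how this might be set alongside the existing `RMM_INITIAL_POOL_SIZE` (the variable names are from these notes; the values and the file location are illustrative assumptions, not documented defaults):

```
# Illustrative env-file fragment; values are hypothetical examples
RMM_INITIAL_POOL_SIZE=1GB
RMM_MAXIMUM_POOL_SIZE=30GB
```

If left unset, the maximum pool size defaults to the maximum, per the note above.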
- copy/load-db-local: Scripts for local database manipulation. Use for:
- Background: The postgres database container stores encrypted account information and file/viz metadata that is not kept in folder `data/`. The blue/green remote migration scripts already handled duplicating it, but there was no convenient script for more direct local manipulation.
- Backups: Run a cronjob that passes a timestamped filename to `etc/scripts/copy-db-local.sh` for regular backups.
- Local upgrades/migrations: Manually call `etc/scripts/copy-db-local.sh` + `etc/scripts/load-db-local.sh` to export your DB from a running instance and copy it into your new instance.
- Your old local instance keeps running as usual (e.g., continuing to serve the 80/443 web ports) while doing a live dump to backup.sql, so this is safe across postgres versions during upgrades.
- Your new local instance stays off, except that its new postgres container service starts and loads in the data; the old local instance keeps running as usual.
- Once happy with testing the new service, such as when running it on an alternate port (section `Caddy` of the docker-compose.yml), you can switch which Caddy gets the main public ports.
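A sketch of both uses (the script paths come from these notes; the install directories, cron schedule, and filenames are hypothetical examples):

```shell
# Generate a timestamped backup filename (for use with copy-db-local.sh)
fname="graphistry-backup-$(date +%Y-%m-%d-%H%M).sql"
echo "$fname"

# Hypothetical cron entry for nightly backups (install dir assumed;
# note the escaped % required inside crontab command fields):
#   0 3 * * * cd /opt/graphistry && ./etc/scripts/copy-db-local.sh "graphistry-backup-$(date +\%F).sql"

# Hypothetical migration: export from the old (running) instance,
# then load into the new (otherwise stopped) one:
#   (cd "$OLD_INSTALL" && ./etc/scripts/copy-db-local.sh backup.sql)
#   (cd "$NEW_INSTALL" && ./etc/scripts/load-db-local.sh backup.sql)
```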
- Datetime inference: Fixed regression in automatically converting common datetime string formats in uploaded data into typed datetime values
- Deterministic community coloring: Community ID (and color) determined by: community size (descending) and highest-id community member (ascending). This should not be a visible change for most users.
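To make the tie-breaking concrete, here is a small sketch of the ordering rule using `sort` (the three-column input is invented data, not Graphistry output; columns are community size, highest member id, and a label):

```shell
# Rank by size (descending), ties broken by highest member id (ascending)
printf '%s\n' \
  '5 17 A' \
  '5 12 B' \
  '9 30 C' \
| sort -k1,1nr -k2,2n
# -> 9 30 C
#    5 12 B
#    5 17 A
```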
- /pivot CSV loader: Fix regression on general format handling
- /pivot auth: Fix overly restrictive access by external routes
- /pivot investigation & template sharing: Investigations are visible again between staff users. (New sharing modes are being added, so this behavior will iteratively improve over the next few months.)
- /pivot deleted templates: Investigations with steps that rely on deleted templates will now load instead of blocking the page load. The step will load without any backing info; you can inspect and delete it, and insert a valid one in its place.
No breaking changes; upgrade as usual
- Local migrations are now easier: copy `data/` as usual, and now run the new `[copy,load]-db-local.sh` scripts for the database service. This more closely mimics the regular blue/green high-uptime updates used in cloud setups.