This article covers two Python scripts that allow you to bulk-export and bulk-update connection configurations in Infoworks via REST API — one for source connections and one for target data connections. Both scripts follow the same two-phase workflow: Export → Edit CSV → Apply.
Table of Contents
- Prerequisites
- Scripts
- Phase 1 – Export
- CSV Column Reference
- Phase 2 – Apply Updates
- Sub-Type Field Routing
- Status Values
- Key Rules & Gotchas
- Quick Command Reference
1. Prerequisites
Create and activate a virtual environment, then install the required dependencies:

```
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install "requests==2.28.2" "urllib3==1.26.18"
```

Deactivate the virtual environment after running the extraction and update scripts below:

```
deactivate
```
Environment variables
Both scripts read credentials from environment variables. Set these in your shell before running any command.
| Variable | Description | Example |
|---|---|---|
| iwx_url | Base URL of your Infoworks instance | http://54.221.97.14:3000 |
| iw_token | Your Infoworks Basic token (the long Base64 string from your credentials — not the short-lived access token) | zThziQ7MoJJP… |
| DRY_RUN | (Optional) Set to true to preview changes without executing them | true |
Linux:

```
export iwx_url="http://<your-host>:3000"
export iw_token="<your-basic-token>"
```

The scripts automatically exchange the basic token for a short-lived access token at startup — you do not need to manage access tokens manually.
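Under the hood, that startup exchange amounts to a single authenticated GET. A minimal sketch of building the request with the standard library; note the /v3/security/token/access path and response handling are assumptions about the Infoworks v3 API convention, not code taken from the scripts:

```python
import urllib.request


def build_token_request(base_url: str, basic_token: str) -> urllib.request.Request:
    """Build the token-exchange request the scripts issue at startup.

    NOTE: the endpoint path is an assumption about the Infoworks v3 API;
    the scripts may use a different route internally.
    """
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v3/security/token/access",
        headers={"Authorization": f"Basic {basic_token}"},
    )
```

Opening this request (with urllib.request.urlopen or the requests library) would return the short-lived access token used for every subsequent API call.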
2. Scripts
| Script | Manages | API base path | Update method |
|---|---|---|---|
| iw_connection_manager.py | Source connections (RDBMS + file/storage sources) | /v3/sources | PUT |
| iw_target_connection_manager.py | Target data connections (Postgres, Snowflake, BigQuery, SQL Server, etc.) | /v3/admin/data-connections | PATCH |
Both scripts are standalone and share no code. Run whichever is appropriate for your use case, or both independently.
3. Phase 1 – Export
The export phase pulls all connection configurations from Infoworks and writes them to a CSV file. You then edit that CSV to specify your changes before running the apply phase.
Source connections
```
python iw_connection_manager.py export --output connections.csv
```

Target data connections

```
python iw_target_connection_manager.py export --output target_connections.csv
```

Note: The --output flag is optional. If omitted, the file is named automatically with a timestamp, e.g. iw_connections_20250405_143201.csv.
What happens during export
- The script authenticates and lists all connections.
- For each connection, it fetches the full configuration via a separate API call.
- Each row is written to the CSV, including convenience columns showing current values and empty editable columns for your changes.
- If a connection fails to fetch (network error, permissions, etc.), the row is written with apply_status = EXPORT_ERROR and will be automatically skipped in the apply phase.
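The export loop can be sketched in outline. This is a simplified stand-in, not the scripts' actual code: fetch_config represents the per-connection GET call, and only a few of the real CSV columns are shown:

```python
import csv
import json


def export_to_csv(connections, fetch_config, out_path):
    """Write one CSV row per connection, marking failed fetches EXPORT_ERROR.

    `connections` is an iterable of {"id": ..., "name": ...} dicts and
    `fetch_config` a callable performing the per-connection GET; both are
    hypothetical stand-ins for the real API calls.
    """
    fields = ["connection_id", "connection_name", "raw_properties_json",
              "apply", "apply_status", "apply_message"]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for conn in connections:
            row = {"connection_id": conn["id"],
                   "connection_name": conn["name"], "apply": ""}
            try:
                # The full GET response becomes the base payload for apply.
                row["raw_properties_json"] = json.dumps(fetch_config(conn["id"]))
                row["apply_status"] = ""
            except Exception as exc:
                # Failed fetches are recorded, then skipped in the apply phase.
                row["apply_status"] = "EXPORT_ERROR"
                row["apply_message"] = str(exc)
            writer.writerow(row)
```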
Progress is printed to the console for every connection:
```
2025-04-05 14:32:01 [INFO] Found 28 target data connection(s).
2025-04-05 14:32:01 [INFO] Fetching connection 'supp_pg' (id=128814e1…, sub_type=postgres) …
2025-04-05 14:32:02 [INFO] OK
2025-04-05 14:32:02 [INFO] Fetching connection 'IWX_Snowflake_API' (id=61417cde…, sub_type=snowflake) …
2025-04-05 14:32:02 [INFO] OK
...
```

4. CSV Column Reference
Source connections CSV (iw_connection_manager.py)
| Column | Editable? | Description |
|---|---|---|
| source_id | No | Internal Infoworks source ID |
| source_name | No | Source display name |
| sub_type | No | e.g. teradata, structured, snowflake |
| current_connection_url | No | Current JDBC URL (RDBMS sources) |
| current_source_base_path | No | Current base path (file/storage sources) |
| current_source_base_path_relative | No | Current relative path (file/storage sources) |
| current_access_id | No | Current cloud access key ID (file/storage sources) |
| raw_connection_payload_json | Never edit | Full GET response — used as the base for PUT. Do not modify. |
| new_connection_url | Yes | New JDBC URL (RDBMS sources) |
| new_source_base_path | Yes | New base path (file/storage sources) |
| new_source_base_path_relative | Yes | New relative path (file/storage sources) |
| new_access_id | Yes | New cloud access key ID (file/storage sources) |
| new_username | Yes | New database username (RDBMS sources) |
| new_password_or_secret | Yes | New password or secret key — required for any update |
| apply | Yes | Set to Y to include this row in the update run |
| apply_status | Written by script | Result after apply phase |
| apply_message | Written by script | Detail message or error |
Target data connections CSV (iw_target_connection_manager.py)
| Column | Editable? | Description |
|---|---|---|
| connection_id | No | Internal Infoworks connection ID |
| connection_name | No | Connection display name |
| type | No | Always TARGET |
| sub_type | No | e.g. POSTGRES, SNOWFLAKE, BIGQUERY, SQL_SERVER |
| current_jdbc_url | No | Current JDBC URL (Postgres, SQL Server) |
| current_url | No | Current Snowflake host URL |
| current_account | No | Current Snowflake account name |
| current_warehouse | No | Current Snowflake warehouse |
| current_project_id | No | Current BigQuery project ID |
| current_username | No | Current database username |
| raw_properties_json | Never edit | Full GET response — used as the base for PATCH. Do not modify. |
| new_jdbc_url | Yes | New JDBC URL (Postgres, SQL Server) |
| new_url | Yes | New Snowflake host URL |
| new_account | Yes | New Snowflake account name |
| new_warehouse | Yes | New Snowflake warehouse |
| new_project_id | Yes | New BigQuery project ID |
| new_username | Yes | New database username |
| new_password | Yes | New password — required for any update (except BigQuery) |
| apply | Yes | Set to Y to include this row in the update run |
| apply_status | Written by script | Result after apply phase |
| apply_message | Written by script | Detail message or error |
How to edit the CSV
- Open the exported CSV in Excel, Google Sheets, or any text editor.
- For each connection you want to update, fill in the relevant new_* columns. Leave unused columns blank.
- Set the apply column to Y for every row you want to update. Rows with N or blank are skipped.
- Save the file in CSV format.
Important: Passwords and secret keys are stored as plain text in this file. Treat it as a credential file — restrict access, do not commit to version control, and delete it after use.
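For more than a handful of rows, the same edits can be scripted instead of made by hand. A minimal sketch using the standard csv module; the mark_updates helper and the example column values are purely illustrative, not part of the shipped scripts:

```python
import csv


def mark_updates(in_path, out_path, updates):
    """Fill new_* columns for the named connections and set apply=Y.

    `updates` maps a connection name to the new_* column values to set.
    All other rows are left untouched, so apply stays blank and they
    are skipped in the apply phase.
    """
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        fields = reader.fieldnames
        rows = list(reader)
    for row in rows:
        change = updates.get(row.get("connection_name", ""))
        if change:
            row.update(change)
            row["apply"] = "Y"
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
```

Writing to a separate output file keeps the exported CSV intact in case an edit needs to be redone.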
5. Phase 2 – Apply Updates
Step 1 — Dry run (recommended)
Always run with DRY_RUN=true first. This prints the intended API URL and a sanitised payload (with passwords redacted) for every apply=Y row. No changes are made.
Source connections:

```
DRY_RUN=true python iw_connection_manager.py apply --input connections.csv
```

Target data connections:

```
DRY_RUN=true python iw_target_connection_manager.py apply --input target_connections.csv
```

Review the console output carefully. Confirm the correct connections are being changed and the payload looks as expected.
Step 2 — Live run
Once you are satisfied with the dry run output, run without DRY_RUN:
Source connections:

```
python iw_connection_manager.py apply --input connections.csv
```

Target data connections:

```
python iw_target_connection_manager.py apply --input target_connections.csv
```

What happens during apply
- The script reads every row in the CSV.
- Rows are skipped if apply is not Y, no new_* fields are filled in, or the row has apply_status = EXPORT_ERROR.
- For eligible rows, the script takes the raw_*_json column as the base payload, applies only the specified changes, and sends the update API call.
- Each row result is logged to the console and written to a new timestamped output CSV (your original input file is never modified).
- At the end, a summary is printed.
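The merge step described above can be sketched as a small pure function. The column-to-property routing is simplified here to a flat field_map argument; the real scripts route fields by sub-type as covered in section 6:

```python
import json


def build_payload(raw_json, row, field_map):
    """Overlay only the filled-in new_* values onto the saved GET payload.

    `field_map` maps a CSV column to a key under payload["properties"]
    (a simplification of the scripts' sub-type routing). Every field not
    named by a filled-in column is preserved verbatim from the export.
    """
    payload = json.loads(raw_json)
    changes = []
    for column, prop in field_map.items():
        value = (row.get(column) or "").strip()
        if value:  # blank columns leave the existing value alone
            payload.setdefault("properties", {})[prop] = value
            changes.append(f"properties.{prop} updated")
    return payload, changes
```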
Example console output
```
Row 1/5 – 'supp_pg' (id=128814e1…, sub_type=postgres) apply=Y
Changes: properties.jdbc_url updated; properties.password updated
UPDATED: OK
Row 2/5 – 'bq-connection' (id=37a5e87e…, sub_type=bigquery) apply=N
Skipping: apply != Y
Row 3/5 – 'IWX_Snowflake_API' (id=61417cde…, sub_type=snowflake) apply=Y
Changes: properties.url updated; properties.warehouse updated; properties.password updated
UPDATED: OK
────────────────────────────────────────────────────────────
APPLY COMPLETE
Updated : 2
Skipped : 2
Failed  : 1
Status CSV written → target_connections_applied_20250405_143530.csv
```

The new_password / new_password_or_secret column is replaced with ***REDACTED*** in the output status CSV.
6. Sub-Type Field Routing
Source connections
| Sub-type(s) | Category | Fields applied |
|---|---|---|
| structured, json, parquet | File / Storage | new_source_base_path, new_source_base_path_relative, new_access_id, new_password_or_secret |
| All others (teradata, oracle, snowflake, sqlserver, db2, etc.) | RDBMS | new_connection_url, new_username, new_password_or_secret |
Target data connections
| Sub-type(s) | Fields applied |
|---|---|
| POSTGRES, SQL_SERVER, and all others not listed below | new_jdbc_url, new_username, new_password |
| SNOWFLAKE | new_url, new_account, new_warehouse, new_username, new_password |
| BIGQUERY | new_project_id only — BigQuery uses a service credential file for authentication; the password field does not apply |
Filling in a new_* column that does not apply to the sub-type (e.g. new_jdbc_url on a Snowflake connection) is harmless — it will be ignored.
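The routing tables above boil down to a small lookup. A sketch that mirrors the tables (not the scripts' actual code):

```python
# Sub-types treated as file/storage sources; everything else is RDBMS.
FILE_SUBTYPES = {"structured", "json", "parquet"}


def source_fields(sub_type):
    """Editable CSV columns that apply to a source connection sub-type."""
    if sub_type.lower() in FILE_SUBTYPES:
        return ["new_source_base_path", "new_source_base_path_relative",
                "new_access_id", "new_password_or_secret"]
    return ["new_connection_url", "new_username", "new_password_or_secret"]


def target_fields(sub_type):
    """Editable CSV columns that apply to a target data connection sub-type."""
    st = sub_type.upper()
    if st == "SNOWFLAKE":
        return ["new_url", "new_account", "new_warehouse",
                "new_username", "new_password"]
    if st == "BIGQUERY":
        return ["new_project_id"]  # service credential file, no password
    return ["new_jdbc_url", "new_username", "new_password"]
```

Columns outside the returned list are simply never read for that sub-type, which is why filling in a non-applicable column is harmless.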
7. Status Values
| Status | Meaning |
|---|---|
| UPDATED | API call succeeded and the connection was updated. |
| SKIPPED | Row was intentionally skipped: apply != Y, no new_* fields filled in, or flagged as an export error. |
| FAILED | API call was attempted but returned an error. See apply_message for the HTTP status and error detail. |
| DRY_RUN | Would have been updated but DRY_RUN=true was set — no change made. |
| EXPORT_ERROR | The GET call failed during export. The row is automatically skipped during apply. |
8. Key Rules & Gotchas
Password is always required for any update
Infoworks returns "password": "********" in GET responses — the actual password is never returned. If the script sent this masked value back in a PUT or PATCH, the server would reject it or store asterisks as the password. Therefore:
- You must supply new_password_or_secret (source) or new_password (target) whenever you are making any change to a connection — even if you are only changing the URL or username.
- If you omit the password, the script will log a warning and continue, but the API call may be rejected by the server.
- BigQuery target connections are the only exception — they use a service credential file and have no password field.
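The check this rule implies can be sketched as a small helper; password_warning is a hypothetical function, not code from the scripts:

```python
# The placeholder Infoworks returns in place of real passwords on GET.
MASK = "********"


def password_warning(row, sub_type):
    """Return a warning string when an update row lacks a usable password.

    BigQuery is exempt. A masked value accidentally copied from a GET
    response is treated the same as no password at all.
    """
    if sub_type.upper() == "BIGQUERY":
        return None
    supplied = (row.get("new_password") or
                row.get("new_password_or_secret") or "").strip()
    if not supplied or supplied == MASK:
        return "no new password supplied; the server may reject the update"
    return None
```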
Never edit the raw JSON column
The raw_connection_payload_json (source) and raw_properties_json (target) columns contain the full GET response and are used as the base for the update payload. The script applies only your specified changes on top of this base — all other fields are preserved exactly. Editing this column may corrupt the payload or silently drop connector-specific configuration.
EXPORT_ERROR rows are always skipped
If a connection failed to fetch during export (e.g. network timeout, permission error), its row has apply_status = EXPORT_ERROR. Even if you set apply = Y, the script will skip it. Re-run the export once the underlying issue is resolved.
The output CSV is always a new file
The apply phase never overwrites your input CSV. A new timestamped file is always created, e.g. connections_applied_20250405_143530.csv. This file serves as your audit record.
Only changed fields are touched
The scripts never rebuild the payload from scratch. They take the full saved GET response, apply the minimum edits you specified, and send the result. All connector-specific fields (driver names, additional params, warehouse settings, etc.) are preserved verbatim.
9. Quick Command Reference
Source connections
| Action | Command |
|---|---|
| Export | python iw_connection_manager.py export --output connections.csv |
| Dry run | DRY_RUN=true python iw_connection_manager.py apply --input connections.csv |
| Live apply | python iw_connection_manager.py apply --input connections.csv |
Target data connections
| Action | Command |
|---|---|
| Export | python iw_target_connection_manager.py export --output target_connections.csv |
| Dry run | DRY_RUN=true python iw_target_connection_manager.py apply --input target_connections.csv |
| Live apply | python iw_target_connection_manager.py apply --input target_connections.csv |