This article covers two Python scripts that allow you to bulk-export and bulk-update connection configurations in Infoworks via REST API — one for source connections and one for target data connections. Both scripts follow the same two-phase workflow: Export → Edit CSV → Apply.


Table of Contents

  1. Prerequisites
  2. Scripts
  3. Phase 1 – Export
  4. CSV Column Reference
  5. Phase 2 – Apply Updates
  6. Sub-Type Field Routing
  7. Status Values
  8. Key Rules & Gotchas
  9. Quick Command Reference

1. Prerequisites


Create and activate a virtual environment, then install the required dependencies:

python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install "requests==2.28.2" "urllib3==1.26.18"

Deactivate the virtual environment after running the extraction and update scripts described below:

deactivate


Environment variables


Both scripts read credentials from environment variables. Set these in your shell before running any command.

| Variable | Description | Example |
|---|---|---|
| iwx_url | Base URL of your Infoworks instance | http://54.221.97.14:3000 |
| iw_token | Your Infoworks Basic token (the long Base64 string from your credentials — not the short-lived access token) | zThziQ7MoJJP… |
| DRY_RUN | (Optional) Set to true to preview changes without executing them | true |


Linux:

export iwx_url="http://<your-host>:3000"
export iw_token="<your-basic-token>"

The scripts automatically exchange the basic token for a short-lived access token at startup — you do not need to manage access tokens manually.
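The startup configuration check can be sketched as a small helper. This is an illustrative sketch, not the scripts' actual internals: the `load_config` name, the error message, and the `DRY_RUN` parsing shown here are assumptions, but they match the variables documented above.

```python
import os

def load_config(env=os.environ):
    """Read required and optional settings from an environment mapping.

    Fails fast with a clear message when a required variable is missing,
    and treats DRY_RUN as enabled only for the value "true"
    (case-insensitive).
    """
    missing = [name for name in ("iwx_url", "iw_token") if not env.get(name)]
    if missing:
        raise SystemExit(
            f"Missing required environment variable(s): {', '.join(missing)}"
        )
    return {
        "base_url": env["iwx_url"].rstrip("/"),   # normalise trailing slash
        "token": env["iw_token"],
        "dry_run": env.get("DRY_RUN", "").strip().lower() == "true",
    }
```

Passing the mapping in as a parameter (defaulting to `os.environ`) keeps the helper easy to test without mutating your shell environment.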


2. Scripts

| Script | Manages | API base path | Update method |
|---|---|---|---|
| iw_connection_manager.py | Source connections (RDBMS + file/storage sources) | /v3/sources | PUT |
| iw_target_connection_manager.py | Target data connections (Postgres, Snowflake, BigQuery, SQL Server, etc.) | /v3/admin/data-connections | PATCH |

Both scripts are standalone and share no code. Run whichever is appropriate for your use case, or both independently.


3. Phase 1 – Export

The export phase pulls all connection configurations from Infoworks and writes them to a CSV file. You then edit that CSV to specify your changes before running the apply phase.

Source connections

python iw_connection_manager.py export --output connections.csv

Target data connections

python iw_target_connection_manager.py export --output target_connections.csv

Note: The --output flag is optional. If omitted, the file is named automatically with a timestamp, e.g. iw_connections_20250405_143201.csv.

What happens during export

  1. The script authenticates and lists all connections.
  2. For each connection, it fetches the full configuration via a separate API call.
  3. Each row is written to the CSV, including convenience columns showing current values and empty editable columns for your changes.
  4. If a connection fails to fetch (network error, permissions, etc.), the row is written with apply_status = EXPORT_ERROR and will be automatically skipped in the apply phase.

Progress is printed to the console for every connection:

2025-04-05 14:32:01 [INFO] Found 28 target data connection(s).
2025-04-05 14:32:01 [INFO]   Fetching connection 'supp_pg' (id=128814e1…, sub_type=postgres) …
2025-04-05 14:32:02 [INFO]     OK
2025-04-05 14:32:02 [INFO]   Fetching connection 'IWX_Snowflake_API' (id=61417cde…, sub_type=snowflake) …
2025-04-05 14:32:02 [INFO]     OK
...
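The per-connection step can be sketched as a row-building function. The `build_row` helper and the exact keys read from the GET response are illustrative assumptions; the column names match the CSV reference below, and the EXPORT_ERROR handling matches step 4 above.

```python
import json

# Editable columns are exported blank for the user to fill in.
NEW_COLUMNS = ["new_jdbc_url", "new_url", "new_account", "new_warehouse",
               "new_project_id", "new_username", "new_password"]

def build_row(conn, properties=None, error=None):
    """Turn one listed target connection into a CSV row dict.

    `conn` is the entry from the list call; `properties` is the full GET
    response for that connection (None when the fetch failed).
    """
    row = {
        "connection_id": conn["id"],
        "connection_name": conn["name"],
        "sub_type": conn.get("sub_type", ""),
        "apply": "",
        "apply_status": "",
        "apply_message": "",
    }
    row.update({col: "" for col in NEW_COLUMNS})
    if error is not None:
        # Fetch failed: flag the row so the apply phase skips it.
        row["apply_status"] = "EXPORT_ERROR"
        row["apply_message"] = str(error)
        return row
    props = properties or {}
    row["current_jdbc_url"] = props.get("jdbc_url", "")
    row["current_username"] = props.get("username", "")
    # The full response is preserved verbatim as the base for the update.
    row["raw_properties_json"] = json.dumps(props)
    return row
```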

4. CSV Column Reference

Source connections CSV (iw_connection_manager.py)

| Column | Editable? | Description |
|---|---|---|
| source_id | No | Internal Infoworks source ID |
| source_name | No | Source display name |
| sub_type | No | e.g. teradata, structured, snowflake |
| current_connection_url | No | Current JDBC URL (RDBMS sources) |
| current_source_base_path | No | Current base path (file/storage sources) |
| current_source_base_path_relative | No | Current relative path (file/storage sources) |
| current_access_id | No | Current cloud access key ID (file/storage sources) |
| raw_connection_payload_json | Never edit | Full GET response — used as the base for PUT. Do not modify. |
| new_connection_url | Yes | New JDBC URL (RDBMS sources) |
| new_source_base_path | Yes | New base path (file/storage sources) |
| new_source_base_path_relative | Yes | New relative path (file/storage sources) |
| new_access_id | Yes | New cloud access key ID (file/storage sources) |
| new_username | Yes | New database username (RDBMS sources) |
| new_password_or_secret | Yes | New password or secret key — required for any update |
| apply | Yes | Set to Y to include this row in the update run |
| apply_status | Written by script | Result after apply phase |
| apply_message | Written by script | Detail message or error |

Target data connections CSV (iw_target_connection_manager.py)

| Column | Editable? | Description |
|---|---|---|
| connection_id | No | Internal Infoworks connection ID |
| connection_name | No | Connection display name |
| type | No | Always TARGET |
| sub_type | No | e.g. POSTGRES, SNOWFLAKE, BIGQUERY, SQL_SERVER |
| current_jdbc_url | No | Current JDBC URL (Postgres, SQL Server) |
| current_url | No | Current Snowflake host URL |
| current_account | No | Current Snowflake account name |
| current_warehouse | No | Current Snowflake warehouse |
| current_project_id | No | Current BigQuery project ID |
| current_username | No | Current database username |
| raw_properties_json | Never edit | Full GET response — used as the base for PATCH. Do not modify. |
| new_jdbc_url | Yes | New JDBC URL (Postgres, SQL Server) |
| new_url | Yes | New Snowflake host URL |
| new_account | Yes | New Snowflake account name |
| new_warehouse | Yes | New Snowflake warehouse |
| new_project_id | Yes | New BigQuery project ID |
| new_username | Yes | New database username |
| new_password | Yes | New password — required for any update (except BigQuery) |
| apply | Yes | Set to Y to include this row in the update run |
| apply_status | Written by script | Result after apply phase |
| apply_message | Written by script | Detail message or error |

How to edit the CSV

  1. Open the exported CSV in Excel, Google Sheets, or any text editor.
  2. For each connection you want to update, fill in the relevant new_* columns. Leave unused columns blank.
  3. Set the apply column to Y for every row you want to update. Rows with N or blank are skipped.
  4. Save the file in CSV format.

Important: Passwords and secret keys are stored as plain text in this file. Treat it as a credential file — restrict access, do not commit to version control, and delete it after use.
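If you are updating many rows, scripting the edits can be less error-prone than a spreadsheet. A minimal sketch using the standard csv module — the `mark_for_update` helper and its arguments are illustrative, not part of either script:

```python
import csv

def mark_for_update(in_path, out_path, changes):
    """Fill new_* columns and set apply=Y for selected rows.

    `changes` maps connection_name -> dict of new_* column values.
    Writes to a separate file so the exported CSV stays intact.
    """
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        fields = reader.fieldnames
    for row in rows:
        edits = changes.get(row["connection_name"])
        if edits:
            row.update(edits)
            row["apply"] = "Y"   # only edited rows are included in the run
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
```

The same plain-text credential warning applies to any file this helper writes.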


5. Phase 2 – Apply Updates

Step 1 — Dry run (recommended)

Always run with DRY_RUN=true first. This prints the intended API URL and a sanitised payload (with passwords redacted) for every apply=Y row. No changes are made.

Source connections:

DRY_RUN=true python iw_connection_manager.py apply --input connections.csv

Target data connections:

DRY_RUN=true python iw_target_connection_manager.py apply --input target_connections.csv

Review the console output carefully. Confirm the correct connections are being changed and the payload looks as expected.

Step 2 — Live run

Once you are satisfied with the dry run output, run without DRY_RUN:

Source connections:

python iw_connection_manager.py apply --input connections.csv

Target data connections:

python iw_target_connection_manager.py apply --input target_connections.csv

What happens during apply

  1. The script reads every row in the CSV.
  2. Rows are skipped if: apply is not Y, no new_* fields are filled in, or the row has apply_status = EXPORT_ERROR.
  3. For eligible rows, the script takes the raw_*_json column as the base payload, applies only the specified changes, and sends the update API call.
  4. Each row result is logged to the console and written to a new timestamped output CSV (your original input file is never modified).
  5. At the end, a summary is printed.
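The skip rules in step 2 can be sketched as a single classification function. The function name and return shape are illustrative, but the three rules are exactly those listed above (EXPORT_ERROR is checked first because it wins even when apply = Y):

```python
def classify_row(row, new_columns):
    """Decide whether a CSV row is eligible for the update call.

    Returns ("UPDATE", None) for eligible rows, or ("SKIPPED", reason).
    """
    if row.get("apply_status", "").strip() == "EXPORT_ERROR":
        return ("SKIPPED", "export error during Phase 1")
    if row.get("apply", "").strip().upper() != "Y":
        return ("SKIPPED", "apply != Y")
    if not any(row.get(col, "").strip() for col in new_columns):
        return ("SKIPPED", "no new_* fields filled in")
    return ("UPDATE", None)
```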

Example console output

Row 1/5 – 'supp_pg' (id=128814e1…, sub_type=postgres) apply=Y
  Changes: properties.jdbc_url updated; properties.password updated
  UPDATED: OK
Row 2/5 – 'bq-connection' (id=37a5e87e…, sub_type=bigquery) apply=N
  Skipping: apply != Y
Row 3/5 – 'IWX_Snowflake_API' (id=61417cde…, sub_type=snowflake) apply=Y
  Changes: properties.url updated; properties.warehouse updated; properties.password updated
  UPDATED: OK
────────────────────────────────────────────────────────────
APPLY COMPLETE
  Updated : 2
  Skipped : 2
  Failed  : 1
Status CSV written → target_connections_applied_20250405_143530.csv

The new_password / new_password_or_secret column is replaced with ***REDACTED*** in the output status CSV.


6. Sub-Type Field Routing

Source connections

| Sub-type(s) | Category | Fields applied |
|---|---|---|
| structured, json, parquet | File / Storage | new_source_base_path, new_source_base_path_relative, new_access_id, new_password_or_secret |
| All others (teradata, oracle, snowflake, sqlserver, db2, etc.) | RDBMS | new_connection_url, new_username, new_password_or_secret |

Target data connections

| Sub-type(s) | Fields applied |
|---|---|
| POSTGRES, SQL_SERVER, and all others not listed below | new_jdbc_url, new_username, new_password |
| SNOWFLAKE | new_url, new_account, new_warehouse, new_username, new_password |
| BIGQUERY | new_project_id only — BigQuery uses a service credential file for authentication; the password field does not apply |

Filling in a new_* column that does not apply to the sub-type (e.g. new_jdbc_url on a Snowflake connection) is harmless — it will be ignored.
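The routing tables above can be sketched as two lookup functions. The function names are illustrative, but the field lists mirror the tables exactly:

```python
# Source sub-types treated as file/storage; everything else is RDBMS.
FILE_STORAGE_SUBTYPES = {"structured", "json", "parquet"}

SOURCE_FIELD_MAP = {
    "file_storage": ["new_source_base_path", "new_source_base_path_relative",
                     "new_access_id", "new_password_or_secret"],
    "rdbms": ["new_connection_url", "new_username", "new_password_or_secret"],
}

def source_fields_for(sub_type):
    """Return which new_* columns apply to a source connection sub-type."""
    category = ("file_storage" if sub_type.lower() in FILE_STORAGE_SUBTYPES
                else "rdbms")
    return SOURCE_FIELD_MAP[category]

def target_fields_for(sub_type):
    """Return which new_* columns apply to a target connection sub-type."""
    st = sub_type.upper()
    if st == "SNOWFLAKE":
        return ["new_url", "new_account", "new_warehouse",
                "new_username", "new_password"]
    if st == "BIGQUERY":
        return ["new_project_id"]          # no password field for BigQuery
    return ["new_jdbc_url", "new_username", "new_password"]
```

Any filled-in column not returned by the lookup is simply ignored, which is why stray values are harmless.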


7. Status Values

| Status | Meaning |
|---|---|
| UPDATED | API call succeeded and the connection was updated. |
| SKIPPED | Row was intentionally skipped: apply != Y, no new_* fields filled in, or flagged as an export error. |
| FAILED | API call was attempted but returned an error. See apply_message for the HTTP status and error detail. |
| DRY_RUN | Would have been updated but DRY_RUN=true was set — no change made. |
| EXPORT_ERROR | The GET call failed during export. The row is automatically skipped during apply. |

8. Key Rules & Gotchas

Password is always required for any update

Infoworks returns "password": "********" in GET responses — the actual password is never returned. If the script sent this masked value back in a PUT or PATCH, the server would reject it or store asterisks as the password. Therefore:

  • You must supply new_password_or_secret (source) or new_password (target) whenever you are making any change to a connection — even if you are only changing the URL or username.
  • If you omit the password, the script will log a warning and continue, but the API call may be rejected by the server.
  • BigQuery target connections are the only exception — they use a service credential file and have no password field.
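A guard against sending the masked value back can be sketched as follows. The helper names and the warning text are illustrative assumptions; the masked-asterisks behaviour is as described above:

```python
def is_masked(value):
    # Infoworks returns the password as a run of asterisks in GET
    # responses; sending that back would overwrite the real credential.
    return bool(value) and set(value) == {"*"}

def resolve_password(row, column="new_password"):
    """Return the password to send, or None (with a warning) when no
    usable value was supplied in the CSV."""
    supplied = row.get(column, "").strip()
    if supplied and not is_masked(supplied):
        return supplied
    print(f"WARNING: no usable {column}; the server may reject this update")
    return None
```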

Never edit the raw JSON column

The raw_connection_payload_json (source) and raw_properties_json (target) columns contain the full GET response and are used as the base for the update payload. The script applies only your specified changes on top of this base — all other fields are preserved exactly. Editing this column may corrupt the payload or silently drop connector-specific configuration.

EXPORT_ERROR rows are always skipped

If a connection failed to fetch during export (e.g. network timeout, permission error), its row has apply_status = EXPORT_ERROR. Even if you set apply = Y, the script will skip it. Re-run the export once the underlying issue is resolved.

The output CSV is always a new file

The apply phase never overwrites your input CSV. A new timestamped file is always created, e.g. connections_applied_20250405_143530.csv. This file serves as your audit record.

Only changed fields are touched

The scripts never rebuild the payload from scratch. They take the full saved GET response, apply the minimum edits you specified, and send the result. All connector-specific fields (driver names, additional params, warehouse settings, etc.) are preserved verbatim.
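This minimal-edit approach can be sketched as a merge over the saved GET response. The `build_update_payload` name is illustrative; the behaviour — blank cells mean "leave unchanged", everything else is preserved verbatim — is as described above:

```python
import copy
import json

def build_update_payload(raw_json, edits):
    """Apply only the requested edits on top of the saved GET response.

    `raw_json` is the raw_properties_json / raw_connection_payload_json
    cell; `edits` maps property names to new values from the new_*
    columns. Fields not named in `edits` are preserved verbatim.
    """
    base = json.loads(raw_json)
    payload = copy.deepcopy(base)
    for key, value in edits.items():
        if value:  # blank CSV cells mean "leave this field unchanged"
            payload[key] = value
    return payload
```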


9. Quick Command Reference

Source connections

| Action | Command |
|---|---|
| Export | python iw_connection_manager.py export --output connections.csv |
| Dry run | DRY_RUN=true python iw_connection_manager.py apply --input connections.csv |
| Live apply | python iw_connection_manager.py apply --input connections.csv |

Target data connections

| Action | Command |
|---|---|
| Export | python iw_target_connection_manager.py export --output target_connections.csv |
| Dry run | DRY_RUN=true python iw_target_connection_manager.py apply --input target_connections.csv |
| Live apply | python iw_target_connection_manager.py apply --input target_connections.csv |