Note: The provided custom deserializer JAR is designed specifically for writing each JSON message from a Kafka topic, in its entirety, to a single column in the target table.
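For context, below is a minimal sketch of what such a pass-through deserializer can look like. It assumes the standard Kafka Deserializer interface, and the package and class name match the ClassName configured in Step 4; the method bodies are illustrative only and may differ from the code actually shipped in the JAR:

```java
package confluent;

import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

// Illustrative sketch only; the shipped JAR's implementation may differ.
// A pass-through deserializer that returns the raw Kafka message bytes
// as one UTF-8 string, so the whole JSON payload lands in a single column.
public class JsonStringSingleColumnDeserializer implements Deserializer<String> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // No configuration needed for a plain pass-through.
    }

    @Override
    public String deserialize(String topic, byte[] data) {
        // Return the entire message as-is; individual JSON fields are not parsed.
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}
```

Because deserialize returns the raw payload as one string, the whole JSON document is written to a single column rather than being split into per-field columns.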

Steps: 

  1. Begin by downloading the deserializer JAR file attached to this article.
  2. On the Infoworks UI, go to Admin > Extensions > Ingestion Extensions, and then select 'Add New Extension.'
  3. Choose 'Streaming Extension' as the Extension Type, assign a name to the extension, and upload the downloaded JAR file.
  4. Set the Alias to 'JsonStringSingleColumnDeserializer' and the ClassName to 'confluent.JsonStringSingleColumnDeserializer', then save the configuration.
  5. Navigate to the Confluent Kafka source, open the source setup page, and open the Ingestion Extension dropdown. Choose the extension configured in Steps 2-4 and save the settings.
  6. Set up Topic Mapping by selecting the desired topic and the custom deserializer.
  7. Once Topic Mapping is configured, click on 'Crawl Schema.' After crawling the schema, select the root struct, click 'Create Table,' and configure the table.
  8. The target table will contain a single column named 'newColumn' with datatype 'string'; each row stores the entire message from one Kafka offset. To change the column name, select the table, then go to Configuration > Table Schema > Upload Schema and edit the column name to your preference.
  9. Optionally, you can remove Infoworks Audit columns.
  10. By default, the target datatype for the single column on Snowflake is VARCHAR. To write instead to a single-column table of datatype VARIANT, create the target table on Snowflake yourself and choose the 'existing table' option in Infoworks (a sketch of such a table definition follows below).
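For the VARIANT option, the pre-created Snowflake table must have a single column whose name matches the Infoworks table schema (here 'newColumn', per Step 8). Below is a hypothetical sketch using the Snowflake JDBC driver; the account URL, credentials, and database/schema/table names are placeholders to replace with your own, and it assumes the snowflake-jdbc dependency is on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

// Hypothetical helper: pre-creates a single-column VARIANT table in
// Snowflake so Infoworks can write to it via the 'existing table' option.
public class CreateVariantTargetTable {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "<user>");          // placeholder credentials
        props.put("password", "<password>");
        props.put("db", "<database>");        // placeholder database/schema
        props.put("schema", "<schema>");
        props.put("warehouse", "<warehouse>");

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:snowflake://<account>.snowflakecomputing.com/", props);
             Statement stmt = conn.createStatement()) {
            // Column name must match the Infoworks table schema (see Step 8).
            stmt.executeUpdate(
                "CREATE TABLE IF NOT EXISTS <target_table> (newColumn VARIANT)");
        }
    }
}
```

You can equally run the CREATE TABLE statement directly in a Snowflake worksheet; the point is only that the table exists before you select it via the 'existing table' option in Infoworks.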