Domain Integration
By default, ClickHouse clusters have security restrictions that prevent egress traffic, but they can be integrated with public domains to enable access.
Once a domain is integrated, the cluster can access that domain directly, or use the ClickHouse table engines whose access path is domain-based (e.g., URL, AzureBlobStorage, AzureQueue, S3, S3Queue) to read and write data at that domain. Examples of both scenarios are provided later on this page.
Clusters on the Netapp Instaclustr managed platform are secured through egress firewall rules to protect against data exfiltration. Integrating with Domains adds a whitelist rule to the firewall enabling access. Consider the security risk before enabling a Domain integration.
How To Enable
The following steps explain how to integrate a ClickHouse cluster with a Domain.
- First, select the “Integrations” option in the console. The page will show existing integrations.
- Select “Add New Integration” to configure a new integration.
- For the type, select “Domain”, then specify the domain to integrate with.
- Finally, press “Add” to configure the integration.
- The Integrations table now shows the newly configured integration. An integration can be deleted by pressing the “Delete” button, which disables access to the domain.
Once domain integration is enabled, you can use certain domain-based ClickHouse table engines. A few examples follow.
How To Use ClickHouse Domain-Based Table Engines
ClickHouse’s domain-based table engines provide robust mechanisms for working with large datasets stored on the web. By leveraging these engines, you can efficiently manage and query your data directly from ClickHouse. Brief usage examples are included below.
For detailed information, refer to the official documentation:
URL Table Engine
The URL table engine allows you to create tables that read from and write to online data, in a range of formats.
Creating a URL Table
To create a table using the URL engine, you need to specify the URL and the format of the data. Here is an example:
CREATE TABLE url_table
(
    id UInt32,
    name String
)
ENGINE = URL('https://public-data.com/file.csv', 'CSV');
Loading Data
Load data into the table by inserting data directly:
INSERT INTO url_table VALUES (1, 'Alice'), (2, 'Bob');
Querying Data
Query data from the URL table as you would with any other table:
SELECT * FROM url_table;
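The S3 and S3Queue engines mentioned earlier follow the same pattern. As a sketch (the bucket URL below is an illustrative placeholder, not a real resource; a public bucket is assumed, so no credentials are passed):

```sql
-- Illustrative sketch: the bucket URL is a placeholder.
-- The bucket's domain must first be integrated as described above.
CREATE TABLE s3_table
(
    id UInt32,
    name String
)
ENGINE = S3('https://my-bucket.s3.amazonaws.com/data/file.csv', 'CSV');

INSERT INTO s3_table VALUES (1, 'Alice'), (2, 'Bob');
SELECT * FROM s3_table;
```

For a private bucket, the S3 engine also accepts access key credentials before the format argument; see the official S3 table engine documentation for details.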
AzureBlobStorage Table Engine
The AzureBlobStorage table engine provides an integration with the Azure Blob Storage ecosystem, allowing you to create tables that read from and write to data in an Azure Blob storage account, in a range of formats.
Creating an AzureBlobStorage Table
To create a table using the AzureBlobStorage engine, you need to specify the connection string (the storage account endpoint and credentials, either an account key or a Shared Access Signature (SAS)), the container name, the blob path, and the format of the data. Here is an example from the ClickHouse GitHub documentation:
CREATE TABLE azure_blob_table
(
    key UInt64,
    data String
)
ENGINE = AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', 'testcontainer', 'test_table', 'CSV');
Loading Data
Load data into the table by inserting data directly:
INSERT INTO azure_blob_table VALUES (1, 'a'), (2, 'b'), (3, 'c');
Querying Data
Query data from the AzureBlobStorage table as you would with any other table:
SELECT * FROM azure_blob_table;
AzureQueue Table Engine
The AzureQueue table engine provides an integration with the Azure Blob Storage ecosystem, allowing streaming data import.
Creating an AzureQueue Table
Similar to creating an AzureBlobStorage table, an AzureQueue table can be created as follows (examples taken from the ClickHouse GitHub documentation):
CREATE TABLE azure_queue_table
(
    key UInt64,
    data String
)
ENGINE = AzureQueue('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', 'testcontainer', '*', 'CSV')
SETTINGS mode = 'unordered';
As an alternative to using an account key for access, you can build the connection string from a SAS token generated at the storage account level with the desired permissions:
CREATE TABLE azure_queue_table
(
    `id` UInt64,
    `name` String,
    `value` UInt64
)
ENGINE = AzureQueue('BlobEndpoint=<protocol: http/https>://<blob-domain>;SharedAccessSignature=<SAS-token>', '<container-name>', '*.csv', 'CSV')
SETTINGS mode = 'unordered';
Unlike the AzureBlobStorage table engine, however, the AzureQueue table engine is used for streaming data, so SELECT queries are not particularly useful: each file is read only once. It is more practical to build a real-time pipeline using materialized views, as follows:
CREATE TABLE azure_queue_engine_table (key UInt64, data String)
ENGINE = AzureQueue('', 'CSV', 'gzip')
SETTINGS mode = 'unordered';

CREATE TABLE stats (key UInt64, data String)
ENGINE = MergeTree() ORDER BY key;

CREATE MATERIALIZED VIEW consumer TO stats
AS SELECT key, data FROM azure_queue_engine_table;

SELECT * FROM stats ORDER BY key;
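The S3Queue engine mentioned earlier works analogously for streaming imports from S3: files matching the path are consumed once and are typically fed into a MergeTree table via a materialized view. A minimal sketch, assuming a hypothetical bucket URL (placeholder, not a real resource) whose domain has been integrated:

```sql
-- Illustrative sketch: the bucket URL is a placeholder.
CREATE TABLE s3_queue_table (key UInt64, data String)
ENGINE = S3Queue('https://my-bucket.s3.amazonaws.com/stream/*.csv', 'CSV')
SETTINGS mode = 'unordered';

-- Destination table that accumulates the streamed rows.
CREATE TABLE s3_stats (key UInt64, data String)
ENGINE = MergeTree() ORDER BY key;

-- Materialized view that consumes new files as they arrive.
CREATE MATERIALIZED VIEW s3_consumer TO s3_stats
AS SELECT key, data FROM s3_queue_table;

SELECT * FROM s3_stats ORDER BY key;
```

As with AzureQueue, query the destination table (here `s3_stats`) rather than the queue table itself.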