All the commands are to be run in the directory containing the sink-parent pom.xml.
The unit test runner is Maven Surefire Plugin.
To run the unit tests, execute the following command:

```shell
mvn clean test
```

The integration test runner is Maven Failsafe Plugin.
The integration tests need a reachable Bigtable instance. It might be either a real Cloud Bigtable or the emulator. Note that some of the tests are broken on the emulator (because it handles some requests differently than Cloud Bigtable).
To configure the Bigtable instance for tests, create either a real Cloud Bigtable instance, or an emulator instance:
It can be created using either the Web UI form or the terraform google_bigtable_instance resource, for example:

```terraform
resource "google_bigtable_instance" "bigtable" {
  name                = "kafka-connect-bigtable-sink-test"
  deletion_protection = false

  cluster {
    cluster_id   = "kafka-connect-bigtable-sink-test-cluster"
    num_nodes    = 1
    storage_type = "SSD"
    zone         = "europe-central2-a"
  }
}
```

This section is optional, you can skip it if you want to use Application Default Credentials.
Create a service account and grant it Bigtable Administrator (roles/bigtable.admin) permissions (such wide permissions are needed for table and column family auto creation).
Download its key.
Note that we provide the credentials not only to our sink, but also to Confluent's (in ConfluentCompatibilityIT).
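The service account steps above can be sketched with gcloud. This is a hedged sketch, not the project's prescribed procedure; the project ID, account name, and key file name below are placeholders, adjust them to your environment:

```shell
# Placeholders: substitute your own project ID and account name.
PROJECT=my-gcp-project
SA_NAME=kafka-connect-bigtable-sink-test
SA_EMAIL="${SA_NAME}@${PROJECT}.iam.gserviceaccount.com"

# Create the service account.
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT"

# Grant it Bigtable Administrator on the project.
gcloud projects add-iam-policy-binding "$PROJECT" \
    --member "serviceAccount:${SA_EMAIL}" \
    --role roles/bigtable.admin

# Download its key.
gcloud iam service-accounts keys create sa-key.json --iam-account "$SA_EMAIL"
```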
If you want to use Application Default Credentials, configure the machine appropriately (on a workstation, log in with gcloud into an account with Bigtable Administrator permissions to the instance created in one of the previous steps).
Otherwise, you need to use the service account's permissions.
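For the Application Default Credentials path, the workstation login mentioned above is a single command:

```shell
# Log in interactively; on a UNIX workstation the resulting credentials are
# written to ~/.config/gcloud/application_default_credentials.json.
gcloud auth application-default login
```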
- Ensure that the sink's pom.xml does not contain the `BIGTABLE_EMULATOR_HOST` variable in the `environmentVariables` subsection within Failsafe's `configuration` section.
- Ensure that `GOOGLE_APPLICATION_CREDENTIALS` in the same subsection points to the appropriate account's key. The code below shows how to use configured Application Default Credentials on a UNIX workstation. Alternatively, you could point it to the service account key created before.
```xml
<environmentVariables>
  <GOOGLE_APPLICATION_CREDENTIALS>${user.home}/.config/gcloud/application_default_credentials.json</GOOGLE_APPLICATION_CREDENTIALS>
</environmentVariables>
```

- Replace the following TODO values with your GCP project ID and Cloud Bigtable instance ID in the `BaseIT#baseConnectorProps()` function:
```java
result.put(GCP_PROJECT_ID_CONFIG, "todotodo");
result.put(BIGTABLE_INSTANCE_ID_CONFIG, "todotodo");
```

Start the emulator using gcloud directly:

```shell
gcloud beta emulators bigtable start --host-port=127.0.0.1:8086 &
```

or using Docker with the compose plugin:
```yaml
services:
  bigtable:
    image: google/cloud-sdk:latest
    ports:
      - 127.0.0.1:8086:8086
    entrypoint:
      - gcloud
      - beta
      - emulators
      - bigtable
      - start
      - --host-port=0.0.0.0:8086
```

```shell
docker compose up -d
```

Ensure that the sink's pom.xml contains the following section in Failsafe's `configuration` section:
```xml
<environmentVariables>
  <GOOGLE_APPLICATION_CREDENTIALS>target/test-classes/fake_service_key.json</GOOGLE_APPLICATION_CREDENTIALS>
  <BIGTABLE_EMULATOR_HOST>localhost:8086</BIGTABLE_EMULATOR_HOST>
</environmentVariables>
```

The integration tests assume that the Bigtable instance they use is empty at the start of the run. This assumption lets the tests skip cleaning up the tables they create.
If the limit on the number of tables in a single Cloud Bigtable instance starts causing problems for you, clean them up by running:

```shell
PROJECT=<FILL_YOUR_PROJECT_NAME>
INSTANCE=<FILL_YOUR_INSTANCE_NAME>
cbt -project "$PROJECT" -instance "$INSTANCE" ls | xargs -P 0 -I {} cbt -project "$PROJECT" -instance "$INSTANCE" deletetable {}
```

To run the integration tests, execute the following command:

```shell
mvn clean verify -DskipUnitTests
```