S3Sink token expired #929

@obbiondo

Description

Issue Guidelines

What version of the Stream Reactor are you reporting this issue for?

Version 4.0.0

Are you running the correct version of Kafka/Confluent for the Stream Reactor release?

Yes

Do you have a supported version of the data source/sink, e.g. Cassandra 3.0.9?

Irrelevant (S3 Sink)

Have you read the docs?

Yes

What is the expected behavior?

When messages are saved to the S3 bucket, the connector should refresh the AWS token so that it can keep writing to the bucket successfully, regardless of the token's expiration interval.
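
For context, here is a minimal sketch of the refresh behaviour expected here, assuming the sink resolves credentials through the AWS SDK v2 default provider chain (which `aws.auth.mode: Default` suggests); this is illustrative only, not the connector's actual code:

    // Minimal sketch, not the connector's code: checks that the AWS SDK v2
    // default provider chain hands back usable (session) credentials on resolve.
    import software.amazon.awssdk.auth.credentials.AwsCredentials;
    import software.amazon.awssdk.auth.credentials.AwsSessionCredentials;
    import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;

    public class TokenRefreshCheck {
        public static void main(String[] args) {
            DefaultCredentialsProvider provider = DefaultCredentialsProvider.create();

            // A long-lived sink task should re-resolve credentials rather than
            // cache one token forever; each call should return valid credentials.
            AwsCredentials creds = provider.resolveCredentials();
            if (creds instanceof AwsSessionCredentials) {
                String token = ((AwsSessionCredentials) creds).sessionToken();
                System.out.println("Session token resolved, length=" + token.length());
            } else {
                System.out.println("Static credentials in use; nothing to expire.");
            }
        }
    }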

What was observed?

When no messages are sent to the topic for a period longer than the token's lifetime, the AWS token expires and is not refreshed, so the task fails because it can no longer write to the bucket.

What is your connector properties configuration (my-connector.properties)?

    aws.auth.mode: Default
    connect.s3.aws.client: AWS
    connect.s3.http.max.retries: "5"
    connect.s3.kcql: insert into bucketName:prefix select * from topicName STOREAS `JSON` WITH_FLUSH_SIZE = 5242880 WITH_FLUSH_INTERVAL = 3600
    connect.s3.max.retries: "5"
    connect.s3.retry.interval: "60"
    connect.s3.seek.migration.enabled: "true"
    key.converter: org.apache.kafka.connect.storage.StringConverter
    topics: topicName
    value.converter: io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url: SCHEMA_REGISTRY_URL
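
For illustration, a hypothetical sketch of how these settings would map onto an AWS SDK v2 S3 client: the `Default` auth mode corresponds to the default credentials provider chain, and `connect.s3.http.max.retries` to a client-level retry policy. The connector's real wiring may differ, and the region below is a placeholder:

    // Hypothetical mapping of the properties above onto an AWS SDK v2 client;
    // not the connector's actual implementation.
    import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
    import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
    import software.amazon.awssdk.core.retry.RetryPolicy;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;

    public class S3ClientSketch {
        public static void main(String[] args) {
            S3Client s3 = S3Client.builder()
                    .region(Region.EU_WEST_1) // placeholder region
                    // aws.auth.mode: Default -> default provider chain
                    .credentialsProvider(DefaultCredentialsProvider.create())
                    // connect.s3.http.max.retries: 5 -> client retry policy
                    .overrideConfiguration(ClientOverrideConfiguration.builder()
                            .retryPolicy(RetryPolicy.builder().numRetries(5).build())
                            .build())
                    .build();
            System.out.println("Client built for service: " + s3.serviceName());
        }
    }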

Please provide full log files (redact any sensitive information)

stackTrace.log
