This is a Terraform provider capable of managing the following JetStream assets:

 * Streams
 * Consumers
 * Key-Value Buckets
 * Key-Value Entries
> [!WARNING]
> Since version 0.2.0 this provider requires NATS Server 2.11 or newer and will fail on older versions.
## Installation

When using Terraform 0.13 or newer, adding the provider and running `terraform init` will download the provider from the Terraform Registry.
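As a sketch, a minimal configuration that pulls the provider from the Registry might look like this; the source address `nats-io/jetstream` is an assumption to verify against the Registry listing:

```terraform
terraform {
  required_providers {
    # assumed Registry address for this provider
    jetstream = {
      source = "nats-io/jetstream"
    }
  }
}
```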
## Credentials

The provider supports NATS 2.0 credentials either in a file or as a Terraform variable or data source. For local use you would set the credentials as in the examples later, but if you use something like Terraform Cloud or wish to store the credentials in a credential store, you can use Terraform variables:
```terraform
variable "ngs_credential_data" {
  type        = string
  description = "Text of the NATS credentials to use to manage JetStream"
}

provider "jetstream" {
  servers         = "connect.ngs.global"
  credential_data = var.ngs_credential_data
}
```
A locked-down user that can administer JetStream, but not retrieve any data stored in JetStream, looks like this:
```
+----------------------+----------------------------------------------------------+
|                                      User                                       |
+----------------------+----------------------------------------------------------+
| Name                 | ngs_jetstream_admin                                      |
| User ID              | UBI3V7PXXQHJ67C4G2W2D7ICX3NK4S2RJCBUPNLWGGV6W34HYFUSD57M |
| Issuer ID            | AARRII4JZYJB3WYECBTMU6WSST2SUZ7SN7PWNSDWQTYVPMXQHDA3WXZ7 |
| Issued               | 2020-03-11 10:37:29 UTC                                  |
| Expires              | 2020-04-11 10:37:29 UTC                                  |
+----------------------+----------------------------------------------------------+
| Pub Allow            | $JS.API.CONSUMER.DELETE.*.*                              |
|                      | $JS.API.CONSUMER.CREATE.>                                |
|                      | $JS.API.CONSUMER.INFO.*.*                                |
|                      | $JS.API.CONSUMER.LIST.*                                  |
|                      | $JS.API.CONSUMER.NAMES.*                                 |
|                      | $JS.API.INFO                                             |
|                      | $JS.API.STREAM.CREATE.*                                  |
|                      | $JS.API.STREAM.DELETE.*                                  |
|                      | $JS.API.STREAM.INFO.*                                    |
|                      | $JS.API.STREAM.LIST                                      |
|                      | $JS.API.STREAM.NAMES                                     |
|                      | $JS.API.STREAM.TEMPLATE.>                                |
|                      | $JS.API.STREAM.UPDATE.*                                  |
| Sub Allow            | _INBOX.>                                                 |
| Response Permissions | Not Set                                                  |
+----------------------+----------------------------------------------------------+
| Max Msg Payload      | Unlimited                                                |
| Max Data             | Unlimited                                                |
| Max Subs             | Unlimited                                                |
| Network Src          | Any                                                      |
| Time                 | Any                                                      |
+----------------------+----------------------------------------------------------+
```
Here’s a command to create this using `nsc`; replace the DemoAccount and 1M strings with your account name and desired expiry time. This will create the credential in `~/.nkeys/creds/synadia/MyAccount/ngs_jetstream_admin.creds`.
## Provider

Terraform Provider that connects to any NATS JetStream server.

### Argument Reference

 * `servers` - The list of servers to connect to, as a comma-separated list.
 * `credentials` - (optional) Fully qualified path to a file holding NATS credentials.
 * `credential_data` - (optional) The NATS credentials as a string, intended for use with data providers.
 * `user` - (optional) Connects using a username; when no password is set this is assumed to be a Token.
 * `password` - (optional) Connects using a password.
 * `nkey` - (optional) Connects using an nkey stored in a file.
 * `tls.ca_file` - (optional) Fully qualified path to a file containing a Root CA (PEM format). Use when the server has certs signed by an unknown authority.
 * `tls.ca_file_data` - (optional) The Root CA PEM as a string, intended for use with data providers. Use when the server has certs signed by an unknown authority.
 * `tls.cert_file` - (optional) The certificate to authenticate with.
 * `tls.cert_file_data` - (optional) The certificate to authenticate with, intended for use with data providers.
 * `tls.key_file` - (optional) The private key to authenticate with.
 * `tls.key_file_data` - (optional) The private key to authenticate with, intended for use with data providers.
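As a sketch of the arguments above, here is a provider block connecting with a credentials file and a custom CA. The server addresses and file paths are hypothetical, and representing `tls.*` as a nested block is an assumption to verify against the provider schema:

```terraform
provider "jetstream" {
  # hypothetical servers and paths - adjust for your deployment
  servers     = "nats1.example.net:4222,nats2.example.net:4222"
  credentials = "/etc/nats/jetstream-admin.creds"

  tls {
    ca_file = "/etc/nats/ca.pem"
  }
}
```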
## jetstream_stream

Creates a JetStream Stream; supports editing resources in place.

### Attribute Reference
 * `description` - (optional) Contains additional information about this stream (string)
 * `metadata` - (optional) A map of strings with arbitrary metadata for the stream
 * `ack` - (optional) If the Stream should support confirming receiving messages via acknowledgements (bool)
 * `discard` - (optional) When a Stream reaches its limits, either old messages are deleted or new ones are denied (`new` or `old`)
 * `discard_new_per_subject` - (optional) When the discard policy is `new` and the stream has max messages per subject set, this applies the `new` behavior to every subject, essentially turning discard `new` from a maximum number of subjects into a maximum number of messages in a subject (bool)
 * `max_age` - (optional) The maximum age of any message that can be kept in the stream, duration specified in seconds (number)
 * `max_bytes` - (optional) The maximum size of all messages that can be kept in the stream (number)
 * `compression` - (optional) Enable stream compression by setting the value to `s2`
 * `max_consumers` - (optional) Number of consumers this stream allows (number)
 * `max_msg_size` - (optional) The maximum individual message size that the stream will accept (number)
 * `max_msgs` - (optional) The maximum number of messages that can be kept in the stream (number)
 * `max_msgs_per_subject` - (optional) The maximum number of messages that can be kept in the stream on a per-subject basis (number)
 * `name` - The name of the stream (string)
 * `replicas` - (optional) How many replicas of the data to keep in a clustered environment (number)
 * `retention` - (optional) The retention policy to apply over and above `max_msgs`, `max_bytes` and `max_age` (string). Options are `limits`, `interest` and `workqueue`. Defaults to `limits`.
 * `storage` - (optional) The storage engine to use to back the stream (string)
 * `subjects` - The list of subjects that will be consumed by the Stream (["list", "string"])
 * `duplicate_window` - (optional) The time window size for duplicate tracking, duration specified in seconds (number)
 * `placement_cluster` - (optional) Place the stream in a specific cluster, influenced by `placement_tags`
 * `placement_tags` - (optional) Place the stream only on servers with these tags
 * `source` - (optional) List of streams to source
 * `mirror` - (optional) Stream to mirror
 * `deny_delete` - (optional) Restricts the ability to delete messages from a stream via the API. Cannot be changed once set to true (bool)
 * `deny_purge` - (optional) Restricts the ability to purge messages from a stream via the API. Cannot be changed once set to true (bool)
 * `allow_rollup_hdrs` - (optional) Allows the use of the Nats-Rollup header to replace all contents of a stream, or subject in a stream, with a single new message (bool)
 * `allow_direct` - (optional) Allow higher performance, direct access to get individual messages via the $JS.DS.GET API (bool)
 * `subject_transform` - (optional) A map of source and destination subjects to transform
 * `republish_source` - (optional) Republish matching messages to `republish_destination`
 * `republish_destination` - (optional) The destination to publish messages to
 * `republish_headers_only` - (optional) Republish only message headers, no bodies
 * `inactive_threshold` - (optional) Removes the consumer after an idle period, specified as a duration in seconds
 * `max_ack_pending` - (optional) Maximum pending Acks before consumers are paused
 * `allow_msg_ttl` - (optional) Enables Per Message TTLs
 * `subject_delete_marker_ttl` - (optional) Enables placing markers when Max Age removes messages, duration specified in seconds. This field requires `allow_rollup_hdrs` to be set to true. (number)
 * `mirror_direct` - (optional) If true, and the stream is a mirror, the mirror will participate in serving direct get requests for individual messages from the origin stream
 * `allow_msg_counter` - (optional) Enables distributed counter mode for the stream. This field can only be set if `retention` is set to `limits`, `discard` is not `new`, `allow_msg_ttl` is false and the stream is not a mirror.
 * `allow_atomic` - (optional) Enables atomic batch publishes
 * `allow_msg_schedules` - (optional) Allows message scheduling for delayed or recurring delivery. This field can only be set if `allow_rollup_hdrs` is true.
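A sketch of a stream using a few of the attributes above; the stream name and subjects are hypothetical:

```terraform
resource "jetstream_stream" "ORDERS" {
  # hypothetical stream capturing all ORDERS.* subjects
  name      = "ORDERS"
  subjects  = ["ORDERS.*"]
  storage   = "file"
  retention = "limits"
  max_age   = 60 * 60 * 24 * 365 # keep messages for up to a year
  replicas  = 3
}
```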
## jetstream_consumer

Create or delete Consumers on any Terraform-managed Stream. Does not support editing consumers in place.

### Attribute Reference

 * `description` - (optional) Contains additional information about this consumer
 * `metadata` - (optional) A map of strings with arbitrary metadata for the consumer
 * `discard` - (optional) When a Stream reaches its limits, either old messages are deleted or new ones are denied (`new` or `old`)
 * `discard_new_per_subject` - (optional) When the discard policy is `new` and the stream has max messages per subject set, this applies the `new` behavior to every subject, essentially turning discard `new` from a maximum number of subjects into a maximum number of messages in a subject (bool)
 * `ack_policy` - (optional) The delivery acknowledgement policy to apply to the Consumer
 * `ack_wait` - (optional) Number of seconds to wait for acknowledgement
 * `deliver_all` - (optional) Starts at the first available message in the Stream
 * `deliver_last` - (optional) Starts at the latest available message in the Stream
 * `delivery_subject` - (optional) The subject where a Push-based consumer will deliver messages
 * `delivery_group` - (optional) When set, Push consumers will only deliver messages to subscriptions with this group set
 * `durable_name` - The durable name of the Consumer
 * `filter_subject` - (optional) Only receive a subset of messages from the Stream based on the subject they entered the Stream on
 * `filter_subjects` - (optional) Only receive a subset of messages from the Stream based on the subjects they entered the Stream on. This is exclusive to `filter_subject`. Only works with NATS Server 2.10 or newer.
 * `max_delivery` - (optional) Maximum deliveries to attempt for each message
 * `replay_policy` - (optional) The rate at which messages will be replayed from the stream
 * `sample_freq` - (optional) The percentage of acknowledgements that will be sampled for observability purposes
 * `start_time` - (optional) The timestamp of the first message that will be delivered by this Consumer
 * `stream_id` - The name of the Stream that this consumer consumes
 * `stream_sequence` - (optional) The Stream Sequence that will be the first message delivered by this Consumer
 * `ratelimit` - (optional) The rate limit for delivering messages to push consumers, expressed in bits per second
 * `heartbeat` - (optional) Enable heartbeat messages for push consumers, duration specified in seconds
 * `flow_control` - (optional) Enable flow control for push consumers
 * `max_waiting` - (optional) The number of pulls that can be outstanding on a pull consumer; pulls received after this is reached are ignored
 * `headers_only` - (optional) When true, no message bodies will be delivered, only headers
 * `max_batch` - (optional) Limits Pull Batch sizes to this maximum
 * `max_bytes` - (optional) The maximum bytes value that may be set when doing a pull on a Pull Consumer
 * `max_expires` - (optional) Limits the Pull Expires duration to this maximum in seconds
 * `inactive_threshold` - (optional) Removes the consumer after an idle period, specified as a duration in seconds
 * `replicas` - (optional) How many replicas of the data to keep in a clustered environment
 * `memory` - (optional) Force the consumer state to be kept in memory rather than inherit the setting from the stream
 * `backoff` - (optional) List of durations in seconds that represents a retry time scale for NaK’d messages
 * `republish_source` - (optional) Republish matching messages to `republish_destination`
 * `republish_destination` - (optional) The destination to publish messages to
 * `republish_headers_only` - (optional) Republish only message headers, no bodies
 * `priority_policy` - (optional) The priority policy the consumer is set to. Valid options are `none`, `overflow`, `pinned_client` and `prioritized`
 * `priority_groups` - (optional) List of priority groups this consumer supports
 * `priority_timeout` - (optional) For the `pinned_client` priority policy, how long before the client times out
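A sketch of a durable pull consumer on a Terraform-managed stream; the stream, durable name, and filter subject are hypothetical, and referencing the stream resource's `id` for `stream_id` is an assumption to verify against the provider documentation:

```terraform
resource "jetstream_consumer" "ORDERS_NEW" {
  # stream_id references a hypothetical jetstream_stream.ORDERS resource
  stream_id      = jetstream_stream.ORDERS.id
  durable_name   = "NEW"
  deliver_all    = true
  filter_subject = "ORDERS.received"
  sample_freq    = 100
}
```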
## jetstream_kv_bucket

Creates a JetStream based KV bucket.

### Example
```terraform
resource "jetstream_kv_bucket" "test" {
  name            = "TEST"
  ttl             = 60
  history         = 10
  max_value_size  = 1024
  max_bucket_size = 10240
}
```
### Attribute Reference

 * `name` - (required) The unique name of the KV bucket; must match `\A[a-zA-Z0-9_-]+\z`
 * `description` - (optional) Contains additional information about this bucket
 * `storage` - (optional) Storage backend to use; defaults to `file`, can be `file` or `memory`
 * `history` - (optional) Number of historic values to keep
 * `ttl` - (optional) How many seconds to keep values for; keeps forever when not set
 * `placement_cluster` - (optional) Place the bucket in a specific cluster, influenced by `placement_tags`
 * `placement_tags` - (optional) Place the bucket only on servers with these tags
 * `max_value_size` - (optional) Maximum size of any value
 * `max_bucket_size` - (optional) The maximum size of all data in the bucket
 * `replicas` - (optional) How many replicas to keep on a JetStream cluster
 * `limit_marker_ttl` - (optional) Enables Per-Key TTLs and Limit Markers, specified in seconds
## jetstream_kv_entry

Creates a JetStream based KV bucket entry.

### Attribute Reference

 * `bucket` - (required) The name of the KV bucket
 * `key` - (required) The entry key
 * `value` - (required) The entry value

## Import existing JetStream resources

See docs/guides/import.md
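For completeness, a sketch of a `jetstream_kv_entry` tied to a Terraform-managed bucket; the key and value are hypothetical, and referencing the bucket via its `name` attribute is an assumption to verify against the provider documentation:

```terraform
resource "jetstream_kv_entry" "config" {
  # hypothetical entry in the jetstream_kv_bucket.test bucket shown earlier
  bucket = jetstream_kv_bucket.test.name
  key    = "service.timeout"
  value  = "30s"
}
```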