The NATS Command Line Interface
A command line utility to interact with and manage NATS.
Installation
Releases are published to GitHub, where zips, RPMs and DEBs for various operating systems can be found.
Installation via go install
The nats CLI can be installed directly via go install, either at the latest version or at a specific release:
go install github.com/nats-io/natscli/nats@latest
macOS installation via Homebrew
On macOS, brew can be used to install the latest released version.
Windows installation via scoop
On Windows, scoop has the latest released version.
Arch Linux installation via yay
For Arch users there is an AUR package that you can install with:
yay natscli
Nightly Docker images
Nightly builds are included in the synadia/nats-server:nightly Docker images.
Configuration Contexts
The nats CLI supports multiple named configurations; for the rest of the document we'll interact with demo.nats.io.
To enable this we'll create a demo configuration and set it as default.
First we add a configuration to capture the default localhost configuration. Next we add a context for demo.nats.io:4222 and select it as default:
NATS Configuration Context "nats"
Description: NATS Demo
Server URLs: demo.nats.io:4222
These are the contexts; the * indicates the default:
nats context ls
Output
Known contexts:
localhost Localhost
nats* NATS Demo
The context is selected as default; use nats context --help to see how to add, remove and edit contexts.
To switch to another context we can use:
nats ctx select localhost
To switch back to the previous context, we can use the context previous subcommand:
nats ctx -- -
Configuration file
The nats CLI stores contexts in ~/.config/nats/context as JSON documents. You can find the description and expected values for this configuration file by running nats --help and looking at the global flags.
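Because contexts are plain JSON documents, they can be inspected with standard tools. The sketch below creates a hypothetical context file by hand to show the idea; the path and field names here are illustrative assumptions, not the CLI's exact schema (real contexts live under ~/.config/nats/context and are best created with nats context):

```shell
# Hypothetical context document; field names are assumptions for illustration,
# the actual schema is defined by the nats CLI itself.
mkdir -p /tmp/nats-ctx-demo
cat > /tmp/nats-ctx-demo/demo.json <<'EOF'
{
  "description": "NATS Demo",
  "url": "nats://demo.nats.io:4222"
}
EOF
# Contexts are plain JSON, so any JSON tooling can read them:
python3 -c 'import json; print(json.load(open("/tmp/nats-ctx-demo/demo.json"))["url"])'
```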
JetStream management
For full information on managing JetStream please refer to the JetStream Documentation.
As of nats-server v2.2.0, JetStream is GA.
Publish and Subscribe
The nats CLI can publish messages and subscribe to subjects.
Basic Behaviours
We will subscribe to the cli.demo subject:
nats sub cli.demo
We can now publish messages to the cli.demo subject. First we publish a single message, then 5 messages with a counter and timestamp in the format message 5 @ 2020-12-03T12:33:18+01:00:
Output
12:33:17 Published 33 bytes to "cli.demo"
12:33:17 Published 33 bytes to "cli.demo"
12:33:17 Published 33 bytes to "cli.demo"
12:33:18 Published 33 bytes to "cli.demo"
12:33:18 Published 33 bytes to "cli.demo"
We can also publish messages read from STDIN:
echo hello|nats pub cli.demo
Output
12:34:15 Reading payload from STDIN
12:34:15 Published 6 bytes to "cli.demo"
Finally, NATS supports HTTP-style headers and the CLI behaves like curl. The receiver will show:
[#47] Received on "cli.demo"
Header1: One
Header2: Two
hello headers
Match requests and replies
We can print matching requests and replies together:
nats sub --match-replies cli.demo
Output
[#48] Received on "cli.demo" with reply "_INBOX.12345"
[#48] Matched reply on "_INBOX.12345"
Matched requests and replies can also be dumped to files:
nats sub --match-replies --dump subject.name
Output
X.json
X_reply.json
JetStream
When receiving messages from a JetStream push consumer, messages can be acknowledged on receipt by passing --ack; the message metadata is also shown:
nats sub js.out.testing --ack
Output
12:55:23 Subscribing on js.out.testing with acknowledgement of JetStream messages
[#1] Received JetStream message: consumer: TESTING > TAIL / subject: js.in.testing / delivered: 1 / consumer seq: 568 / stream seq: 2638 / ack: true
test JS message
Queue Groups
When subscribers join a Queue Group, messages are randomly load-shared within the group. Perform the following subscribe in 2 or more shells and then publish messages using some of the methods shown above; these messages will only be received by one of the subscribers at a time.
nats sub cli.demo --queue=Q1
Service Requests and Creation
NATS supports an RPC mechanism where a service receives requests and replies with data in response.
nats reply 'cli.weather.>' "Weather Service"
Output
12:43:28 Listening on "cli.weather.>" in group "NATS-RPLY-22"
In another shell we can send a request to this service:
nats request "cli.weather.london" ''
Output
12:46:34 Sending request on "cli.weather.london"
12:46:35 Received on "_INBOX.BJoZpwsshQM5cKUj8KAkT6.HF9jslpP" rtt 404.76854ms
Weather Service
This shows that the service round trip took 404ms, and we can see the response, Weather Service.
To make this a bit more interesting we can have the service call the wttr.in web service, for example via nats reply with a --command that runs curl against wttr.in.
The nats CLI parses the subject, extracts {london,newyork} from it and calls curl, replacing {{2}} with the body of the 2nd subject token - {london,newyork}.
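The token substitution itself can be previewed with standard shell tools. Counting tokens from zero, as the {{2}} template implies, token 2 of cli.weather.london is london:

```shell
subject="cli.weather.london"
# {{2}} resolves to the third dot-separated field when counting from zero:
echo "$subject" | cut -d. -f3
```

This prints london, the value that would be spliced into the curl command.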
Translating message data using a converter command
In addition to the raw output of messages from nats sub and nats stream view, you can translate the message data by running it through a command.
The command receives the message data as raw bytes on stdin, and its output becomes the displayed output for the message. You can additionally pass the subject into the command's arguments by using {{Subject}} in the translation command.
Examples for using the translation feature:
Here we use the jq tool to format our JSON message payload into a more readable format.
We subscribe to a subject that will receive json data.
nats sub --translate 'jq .' cli.json
Now we publish some example data.
nats pub cli.json '{"task":"demo","duration":60}'
The output will show the message formatted:
23:54:35 Subscribing on cli.json
[#1] Received on "cli.json"
{
"task": "demo",
"duration": 60
}
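Since the translator simply receives the raw payload on stdin, the same formatting can be previewed locally without any NATS connection (here using python3's json.tool in place of jq, which may not be installed everywhere):

```shell
# Feed the example payload to a pretty-printer, exactly as a
# --translate command would receive it on stdin:
echo '{"task":"demo","duration":60}' | python3 -m json.tool
```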
Another example is creating hex dumps from any message to avoid terminal corruption.
By changing the subscription into:
nats sub --translate 'xxd' cli.json
We will get the following output for the same published message:
00:02:56 Subscribing on cli.json
[#1] Received on "cli.json"
00000000: 7b22 7461 736b 223a 2264 656d 6f22 2c22 {"task":"demo","
00000010: 6475 7261 7469 6f6e 223a 3630 7d duration":60}
Examples for using the translation feature with a template:
A somewhat artificial example using the subject as argument would be:
nats sub --translate "sed 's/\(.*\)/{{Subject}}: \1/'" cli.json
Output
00:22:19 Subscribing on cli.json
[#1] Received on "cli.json"
cli.json: {"task":"demo","duration":60}
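Because the CLI substitutes {{Subject}} into the arguments before running the command, the effective translator for the message above is plain sed and can be checked locally:

```shell
# {{Subject}} has already been replaced with cli.json by the CLI;
# \1 re-inserts the captured payload after the subject prefix:
echo '{"task":"demo","duration":60}' | sed 's/\(.*\)/cli.json: \1/'
```

This reproduces the cli.json: {"task":"demo","duration":60} line shown above.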
The translation feature makes it possible to write specialized or universal translators to aid in debugging messages in streams or core NATS.
Benchmarking and Latency Testing
Benchmarking and latency testing is a key requirement for evaluating the production readiness of your NATS network.
Benchmarking
Here we’ll run these benchmarks against a local server instead of demo.nats.io.
nats context select localhost
Output
NATS Configuration Context "localhost"
Description: Localhost
Server URLs: nats://127.0.0.1:4222
We can benchmark core NATS publishing performance; here we publish 10 million messages from 2 concurrent publishers. By default, messages are published as quickly as possible without any acknowledgement or confirmation:
nats bench pub test --msgs 10000000 --clients 2 --no-progress
Run a nats bench sub instance with two concurrent subscribers on the same subject (so a fan out of 1 to 2) while publishing messages to measure the rate of messages being delivered:
nats bench sub test --msgs 10000000 --clients 2 --no-progress & nats bench pub test --msgs 10000000 --clients 2 --no-progress
JetStream testing can be done using the nats bench js command. You can, for example, first measure the speed of publishing into a stream (which gets created first), and then measure the speed of receiving (i.e. replaying) the messages from the stream using ordered consumers.
Similarly, you can benchmark synchronous request-reply interactions using the NATS service functionality through the nats bench service serve and nats bench service request commands. For example, you can first start 2 service instances in one window:
nats bench service serve test.service --clients 2
And then run a benchmark with 10 synchronous requesters in another window
nats bench service request test.service --clients 10 --no-progress
There are numerous other flags that can be set to configure size of messages, using fetch or consume for JetStream consumers and much more, see nats bench and nats cheat bench for some examples.
Latency
Latency is the time it takes for a message to cross your network. With the nats CLI you can connect a publisher and subscriber to your NATS network and measure the latency between them.
==============================
Pub Server RTT : 64µs
Sub Server RTT : 70µs
Message Payload: 8B
Target Duration: 5s
Target Msgs/Sec: 500000
Target Band/Sec: 7.6M
==============================
HDR Percentiles:
10: 57µs
50: 94µs
75: 122µs
90: 162µs
99: 314µs
99.9: 490µs
99.99: 764µs
99.999: 863µs
99.9999: 886µs
99.99999: 1.483ms
100: 1.483ms
==============================
Actual Msgs/Sec: 499990
Actual Band/Sec: 7.6M
Minimum Latency: 25µs
Median Latency : 94µs
Maximum Latency: 1.483ms
1st Sent Wall Time : 3.091ms
Last Sent Wall Time: 5.000098s
Last Recv Wall Time: 5.000168s
Various flags exist to adjust message size and target rates; see nats latency --help.
Super Cluster observation
NATS publishes a number of events and has a Request-Reply API that exposes a wealth of internal information about the
state of the network.
For most of these features you will need a System Account enabled; most of these commands are run against that account.
We create a system context before running these commands and pass it to the commands.
Lifecycle Events
nats event --context system
Output
Listening for Client Connection events on $SYS.ACCOUNT.*.CONNECT
Listening for Client Disconnection events on $SYS.ACCOUNT.*.DISCONNECT
Listening for Authentication Errors events on $SYS.SERVER.*.CLIENT.AUTH.ERR
[12:18:35] [puGCIK5UcWUxBXJ52q4Hti] Client Connection
Server: nc1-c1
Cluster: c1
Client:
ID: 17
User: one
Name: NATS CLI Version development
Account: one
Library Version: 1.11.0 Language: go
Host: 172.21.0.1
[12:18:35] [puGCIK5UcWUxBXJ52q4Hw8] Client Disconnection
Reason: Client Closed
Server: nc1-c1
Cluster: c1
Client:
ID: 17
User: one
Name: NATS CLI Version development
Account: one
Library Version: 1.11.0 Language: go
Host: 172.21.0.1
Stats:
Received: 0 messages (0 B)
Published: 1 messages (0 B)
RTT: 1.551714ms
Here one can see a client connect and disconnect shortly after; several other system events are supported.
If an account is running JetStream, the nats event tool can also be used to look at JetStream advisories by passing
--js-metric --js-advisory
These events are JSON messages and can be viewed raw using --json, or in Cloud Events format with --cloudevent;
finally, a short version of the messages can be shown:
nats event --short
Output
Listening for Client Connection events on $SYS.ACCOUNT.*.CONNECT
Listening for Client Disconnection events on $SYS.ACCOUNT.*.DISCONNECT
Listening for Authentication Errors events on $SYS.SERVER.*.CLIENT.AUTH.ERR
12:20:58 [Connection] user: one cid: 19 in account one
12:20:58 [Disconnection] user: one cid: 19 in account one: Client Closed
12:21:00 [Connection] user: one cid: 20 in account one
12:21:00 [Disconnection] user: one cid: 20 in account one: Client Closed
12:21:00 [Connection] user: one cid: 21 in account one
Super Cluster Discovery and Observation
When a cluster or super cluster of NATS servers is configured with a system account, a wealth of information is available
via internal APIs; the nats tool can interact with these and observe your servers.
A quick view of the available servers and your network RTT to each can be seen with nats server ping; a general server overview can be seen with nats server list.
Data from a specific server can be accessed using its server name or ID:
nats server info nc1-c1
Output
Server information for nc1-c1 (NBNIKFCQZ3J6I7JDTUDHAH3Z3HOQYEYGZZ5HOS63BX47PS66NHPT2P72)
Process Details:
Version: 2.2.0-beta.34
Git Commit: 2e26d919
Go Version: go1.14.12
Start Time: 2020-12-03 12:18:00.423780567 +0000 UTC
Uptime: 10m1s
Connection Details:
Auth Required: true
TLS Required: false
Host: localhost:10000
Client URLs: localhost:10000
localhost:10002
localhost:10001
Limits:
Max Conn: 65536
Max Subs: 0
Max Payload: 1.0 MiB
TLS Timeout: 2s
Write Deadline: 10s
Statistics:
CPU Cores: 2 1.00%
Memory: 13 MiB
Connections: 1
Subscriptions: 0
Msgs: 240 in 687 out
Bytes: 151 KiB in 416 KiB out
Slow Consumers: 0
Cluster:
Name: c1
Host: 0.0.0.0:6222
URLs: nc1:6222
nc2:6222
nc3:6222
Super Cluster:
Name: c1
Host: 0.0.0.0:7222
Clusters: c1
c2
c3
In addition, various reports can be generated using nats server report; this allows one to list all connections and
subscriptions across the entire cluster, with filtering to limit the results by account, etc.
Additional raw information in JSON format can be retrieved using the nats server request commands.
Monitoring
The nats server check command provides numerous monitoring utilities that support the popular Nagios exit-code-based
protocol, a format compatible with the Prometheus textfile format, and a human-friendly textual output.
Using these tools one can create monitors for various aspects of NATS Server, JetStream and KV.
Stream and Consumer monitoring
The nats server check stream and nats server check consumer commands can be used to monitor the health of Streams and
Consumers.
We'll cover the flags below, but since version 0.2.0 these commands support auto-configuration from metadata on the
Stream and Consumer. For example, if the command accepts --msgs-warn then the metadata key io.nats.monitor.msgs-warn
can be used to set the same value. Calling the check command without passing the value on the command line will use the
metadata value instead.
Streams
The stream check command allows the health of a stream to be monitored including Sources, Mirrors, Cluster Health
and more.
To perform end-to-end health checks on a stream, it is suggested that canary messages are published regularly into the
stream, with clients detecting those and discarding them after acknowledgement.
The nats server check message command can be used to check that such canary messages exist in the stream, how old they
are and whether their content is correct. We suggest using this in complex sourcing and mirroring setups to perform an
additional out-of-band health check on the flow of messages, including checking timestamps on the messages.
--lag-critical=MSGS Critical threshold to allow for lag on any source or mirror. Lag is how many messages the source or
mirror is behind; this means the mirror or source does not have complete data and would require fixing.
--seen-critical=DURATION Critical threshold for how long ago the source or mirror should have been seen. During
network outages or problems with the foreign Stream this time would increase. The duration can be a string like 5m.
--min-sources=SOURCES, --max-sources=SOURCES Minimum and maximum number of sources to expect; these allow you to
monitor that, in a dynamically configured environment, the expected number of sources are configured.
--peer-expect=SERVERS Number of cluster replicas to expect, again allowing an assertion that the configuration does
not change unexpectedly.
--peer-lag-critical=OPS Critical threshold to allow for cluster peer lag; any RAFT peer that is further behind than
this number of operations will result in a critical error.
--peer-seen-critical=DURATION Critical threshold for how long ago a cluster peer should have been seen; this is
similar to the lag on Sources and Mirrors but checks the lag in the RAFT cluster.
--msgs-warn=MSGS and --msgs-critical=MSGS Checks the number of messages in the stream; if warn is smaller than
critical, the check will alert when there are fewer messages than the thresholds.
--subjects-warn=SUBJECTS and --subjects-critical=SUBJECTS Checks the number of subjects in the stream. If warn is
bigger than critical, the logic is inverted, ensuring that no more than the threshold number of subjects exist in the stream.
Consumers
The consumer check is concerned with message flow through a consumer and has various adjustable thresholds in duration
and count to detect stalled consumers, consumers with no active clients, consumers with slow clients, or ones where
processing of messages is failing.
A suggested pattern is publishing canary messages into the stream regularly, perhaps with the header Canary: 1 set,
and having applications simply ACK and discard those messages. This way, even in idle times, the end-to-end flow of messages
can be monitored.
--outstanding-ack-critical=-1 Maximum number of outstanding acks to allow; this allows you to alert on the scenario
where clients consuming messages are slow to process them and the number of outstanding acks is growing. Once this
hits the configured maximum the consumer will stall.
--waiting-critical=-1 Maximum number of waiting pulls to allow
--unprocessed-critical=-1 Maximum number of unprocessed messages to allow; this indicates how far behind the end
of the stream the consumer is. In work queue scenarios this will raise an alert if the amount of outstanding work
grows.
--last-delivery-critical=0s The time since the last delivery to a client; if this grows it could mean there are no
messages to deliver or no clients to deliver messages to.
--last-ack-critical=0s The time since the last message was acknowledged; a growing duration might indicate that no
messages are being successfully processed.
--redelivery-critical=-1 Alerts on the number of redeliveries currently in flight; a high number means many clients
are NAKing or not completing message processing within the allowed ack window.
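The interaction between outstanding acks and stalling can be sketched as a toy loop. This is purely illustrative pseudologic, not the server's implementation: once the outstanding-ack count reaches the configured maximum, no further messages are delivered until some are acknowledged.

```shell
# Toy sketch (not nats-server code): delivery stops once outstanding
# unacknowledged messages reach the configured maximum.
max_ack_pending=3
outstanding=0
delivered=""
for msg in 1 2 3 4 5; do
  if [ "$outstanding" -ge "$max_ack_pending" ]; then
    break   # consumer is stalled until clients ack
  fi
  delivered="$delivered $msg"
  outstanding=$((outstanding + 1))
done
echo "delivered:$delivered"
```

Only the first three messages are delivered; this is the state --outstanding-ack-critical is designed to catch before it happens.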
Schema Registry
We are adopting JSON Schema to describe the core data formats of events and advisories - as shown by nats event. Additionally,
all interactions with the JetStream API are documented using the same format.
These schemas can be used with tools like QuickType to generate stubs for various programming
languages.
The nats CLI allows you to view these schemas and validate documents using these schemas.
The list of schemas can be limited using a regular expression; try nats schema ls request to see all API requests.
Schemas can be viewed in their raw JSON or YAML formats using nats schema info io.nats.jetstream.advisory.v1.consumer_action,
these schemas include descriptions about each field and more.
Finally, if you are interacting with the API using JSON request messages constructed in languages not supported
by our own management libraries, you can use this tool to validate your messages:
Validation errors in request.json:
retention: retention must be one of the following: "limits", "interest", "workqueue"
(root): Must validate all the schemas (allOf)
Here we validate request.json against the schema that describes the API to create Streams; the validation indicates
an incorrect value in the retention field.
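The specific error above is an enum check: retention may only take one of three values. A stand-in for that single check can be sketched in shell (this is illustrative only, not the real JSON Schema that nats schema validate applies):

```shell
# Illustrative stand-in for the schema's enum check on the retention field
# (the real validation is done against the full JSON Schema by the CLI):
retention="old"
case "$retention" in
  limits|interest|workqueue) result="ok" ;;
  *) result='retention must be one of the following: "limits", "interest", "workqueue"' ;;
esac
echo "$result"
```

With retention set to "old" this reports the same class of error the CLI printed above.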