A Console Routine on Channeling NATS. Let's GO!

TL;DR

This article is a short example of NATS (an open-source messaging system), tview (a Golang-based terminal UI library) and concurrency in Golang using channels and goroutines. The source code can be found on GitHub.

What

I have recently started working on a personal Golang project to build something similar to Uptime Kuma, a self-hosted monitoring tool. I believe it’s a perfect project with just the right complexity for learning advanced Golang concepts.

Short video showcasing NATS and Golang channels with goroutines

The project code on GitHub has 4 directories:

  1. Server - The central server, which stores server & resource details, configs, etc.
  2. Client - TUI-based native client app to manage the resources that need to be monitored. It also gives TUI-based views into historical/real-time monitoring data.
  3. Agent - Lightweight remote agent running on the target host servers, responsible for shipping telemetry data to the central server.
  4. Proto - Protobuf-based API contracts between the server, client & agent.

NATS

NATS is a simple, secure and performant communications system and data layer for digital systems, services and devices. In this example project it is used for pub-sub messaging: shipping telemetry data from the remote target servers to the central server.

The NATS server

To keep things simple for this example, we are using an embedded NATS server, which is set up in the central server as below:

1 opts := &natsServer.Options{
2     Port: 4222,
3 }
4 ns, err := natsServer.NewServer(opts)
5 if err != nil {
6     log.Fatal(err)
7 }
8 go ns.Start()

Line #2 configures the embedded NATS server to run on port 4222.

Line #8, go ns.Start(): the go keyword creates a new goroutine, which is a lightweight thread managed by the Go runtime. Goroutines are a fundamental part of Go’s concurrency model.

When this line executes, ns.Start() runs concurrently with the rest of the program. The main execution doesn’t wait for this method to complete but continues immediately to the next line. This pattern is commonly used when launching background services, servers, or long-running processes that need to operate independently of the main program flow.
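One caveat with this pattern: since ns.Start() returns control immediately, a caller could try to connect before the server is actually listening. Below is a minimal, self-contained sketch of the same setup that also blocks until the embedded server is ready; the port matches the snippet above, while the import alias and the 5-second timeout are my own assumptions rather than the project’s actual code.

package main

import (
	"log"
	"time"

	// assumed import alias for the embedded NATS server package
	natsServer "github.com/nats-io/nats-server/v2/server"
)

func main() {
	opts := &natsServer.Options{Port: 4222}
	ns, err := natsServer.NewServer(opts)
	if err != nil {
		log.Fatal(err)
	}

	// Start the server in a background goroutine; main continues immediately.
	go ns.Start()

	// Block (up to 5 seconds) until the server can accept client connections.
	if !ns.ReadyForConnections(5 * time.Second) {
		log.Fatal("NATS server did not become ready in time")
	}
	log.Println("Embedded NATS server ready on port 4222")
}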

Publishing NATS message

Let’s look at the agent code which publishes a Protobuf message every second.

 1 func main() {
 2     nc, err := nats.Connect("nats://127.0.0.1:4222")
 3     if err != nil {
 4         log.Fatal(err)
 5     }
 6     defer nc.Close()
 7     log.Println("Connected to NATS server at", nc.ConnectedUrl())
 8     for c := time.Tick(1 * time.Second); ; <-c { // run the loop body once per second
 9         v, _ := mem.VirtualMemory()
10         cpuPercent, _ := cpu.Percent(0, false)
11         diskUsage, _ := disk.Usage("/")
12
13         stats := &api.BaseStatsReply{
14             Cpu:  &cpuPercent[0],
15             Mem:  &v.UsedPercent,
16             Disk: &diskUsage.UsedPercent,
17         }
18         msg, err := proto.Marshal(stats)
19         if err != nil {
20             log.Fatal(err)
21         }
22         err = nc.Publish("host.stats.1", msg)
23         if err != nil {
24             log.Fatal(err)
25         }
26         log.Printf("Published message to NATS server, #%v", stats)
27     }
28 }

Line 2, nats.Connect("nats://127.0.0.1:4222"), initializes a client connection to the NATS server.

Line 18 serializes the host vitals, including CPU, memory & disk utilization, into a Protobuf message.

Line 22, nc.Publish("host.stats.1", msg), publishes the Protobuf-serialized message to the central server on the NATS subject host.stats.1.

Subscribing to NATS message

Now let’s look at the client code, which subscribes to messages published on the subject host.stats.1.

 1 go func() {
 2     _, err := nc.Subscribe("host.stats.1", func(msg *nats.Msg) {
 3         rcvData := &api.BaseStatsReply{}
 4         err := proto.Unmarshal(msg.Data, rcvData) // := avoids racing on the outer err
 5         if err != nil {
 6             log.Fatal(err)
 7         }
 8         app.QueueUpdateDraw(func() {
 9             cpuTxt.SetText(fmt.Sprintf("CPU: %.2f %%", *rcvData.Cpu))
10             diskTxt.SetText(fmt.Sprintf("Disk: %.2f %%", *rcvData.Disk))
11             memTxt.SetText(fmt.Sprintf("Memory: %.2f %%", *rcvData.Mem))
12         })
13     })
14     if err != nil {
15         log.Fatal(err)
16     }
17 }()

Line 2, nc.Subscribe("host.stats.1", func(msg *nats.Msg) {...}), is where the TUI client program subscribes to messages published on the subject host.stats.1. This is done in a goroutine to avoid blocking the main thread, which is responsible for drawing the UI and listening to events like key presses. Every time a message is received on this subscription, the new stats are set on the existing text views. This happens inside the QueueUpdateDraw callback, which queues the update on the UI’s main event loop and refreshes the screen immediately after the callback function executes.
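For context, here is a minimal sketch of the tview scaffolding the snippet above assumes, i.e. where app, cpuTxt, memTxt and diskTxt could come from; the layout is illustrative and not necessarily the project’s actual UI code.

package main

import "github.com/rivo/tview"

func main() {
	app := tview.NewApplication()

	// Text views that the NATS subscription callback updates.
	cpuTxt := tview.NewTextView().SetText("CPU: -")
	memTxt := tview.NewTextView().SetText("Memory: -")
	diskTxt := tview.NewTextView().SetText("Disk: -")

	// Stack the three stat views vertically.
	flex := tview.NewFlex().SetDirection(tview.FlexRow).
		AddItem(cpuTxt, 1, 0, false).
		AddItem(memTxt, 1, 0, false).
		AddItem(diskTxt, 1, 0, false)

	// The NATS subscription goroutine from the snippet above would be
	// launched here, before handing control to the UI event loop.

	// Run blocks the main goroutine: it draws the UI and handles key events.
	if err := app.SetRoot(flex, true).Run(); err != nil {
		panic(err)
	}
}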

Golang Channels and Routines

As seen above, a lot of the code relies on goroutines, including starting the NATS server in the central server code. The client TUI code, built with @rivo’s tview, will potentially use a lot of goroutines and pass messages through Golang channels. This keeps the main thread unblocked so it can draw/update the UI and capture events.

Channels are the pipes that connect concurrent goroutines. You can send values into channels from one goroutine and receive those values into another goroutine.
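As a tiny, self-contained illustration of that idea (not taken from the project code), here is one goroutine sending values into a channel while the main goroutine receives them:

package main

import (
	"fmt"
	"time"
)

func main() {
	stats := make(chan float64)

	// Producer goroutine: sends a value into the channel every 100ms.
	go func() {
		for i := 0; i < 3; i++ {
			stats <- float64(i) * 10.0
			time.Sleep(100 * time.Millisecond)
		}
		close(stats) // signal the receiver that no more values are coming
	}()

	// The main goroutine receives values until the channel is closed.
	for v := range stats {
		fmt.Printf("received: %.2f\n", v)
	}
}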

We will cover more of Channels and Goroutines in upcoming articles focused on the client TUI program.

Bonus - Protobuf

Protobuf has been a bonus in this whole adventure. It’s more commonly used in conjunction with gRPC, but it was a real kicker to use it with NATS as a messaging format. Being a binary format, it’s perfect for a use case that demands high throughput, performance and a tiny size over the wire. This is also where NATS shines, since it is agnostic to the serialization strategy being used. Another benefit of pairing Protobuf with NATS is a tight API contract between all the services, although this is only partially true: it is not a full service contract like a gRPC service definition, and applies only to the message serialization format.
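For a rough idea of what such a contract looks like, here is a hypothetical Protobuf definition for BaseStatsReply; the field names and numbers are assumptions inferred from the pointer fields (*float64) used in the Go snippets above, and the actual contract lives in the project’s proto directory.

// stats.proto (hypothetical sketch, not the project's actual contract)
syntax = "proto3";

package api;

option go_package = "example.com/monitor/proto/api"; // placeholder module path

// BaseStatsReply carries point-in-time host utilization percentages.
// Marking scalar fields optional makes protoc-gen-go emit pointer
// fields (*float64), which matches the agent code above.
message BaseStatsReply {
  optional double cpu  = 1;
  optional double mem  = 2;
  optional double disk = 3;
}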

Why

My 3-point resolution for the rest of 2025:
  1. Learn advanced concepts in Golang by building useful real-life projects.
  2. (Try to) Build a better Golang-based open-source alternative to https://uptime.kuma.pet.
  3. Pique my interest in terminal user interfaces (TUIs).

I have been using Uptime Kuma for a couple of years and have been loving it. However, it’s missing a few things, like monitoring of remote server resources (viz. CPU, RAM, etc.) and user management. It can also become a resource hog at times, especially since it’s written in JS; perhaps that can be substantially improved in Golang. I also like the idea of remote executors, i.e. runners that can run off different infra/servers in an Actor model, executing the scheduled monitoring tasks and pushing the results to the central server.

I am aware of Prometheus, Grafana et al.; however, I feel they are overkill for my medium-scale requirements. Uptime Kuma is good for monitoring a small-scale setup with a few servers and endpoints, while Prometheus and its like are excellent for enterprise-level load. I hope to build a solution for the in-between, medium-scale requirements.

Still not convinced? Read my 3-point resolution for this year.


Signing off,

-RR