If a server falls in the data center and nobody is monitoring it, does it make a sound? For my personal projects I got by without a server for a long time, but recently I found myself needing something more always-on: somewhere I can just run some services, listen on whatever ports I want, and reminisce about web development in the '90s. It's on now; take a look at this SingleStore-backed chart:
Since I already have a personal AWS account, I chose Amazon Lightsail for a VPS; it's more user-friendly than EC2 for personal stuff. Lightsail comes with bare-bones CPU and networking monitoring in the AWS console, but it's not very flexible.
Companies like DataDog offer more advanced solutions. Unfortunately, their Pro plan costs $15 per host, which is more expensive than the VPS it would be monitoring. I just want some place to log my memory and CPU usage where I can visualize it over time, so once more I'll turn to SingleStore.
SingleStore is a great database for time-series monitoring data: its SORT KEY keeps time-series data perfectly sorted on disk and in bottomless object storage. Monitoring using SingleStore scales really well, and there are very large monitoring systems built on the SingleStore managed service. For my own tiny VPS I can just get by with SingleStore's Free Tier, which has so far backed all my projects on this blog (and will continue to do so, at least until my company sponsors my personal projects with a larger instance size).
OK, first: if you just want turn-key monitoring, probably just pay the DataDog tax or use some other service. But if you are building a large monitoring system or SaaS product, then SingleStore can definitely be your database.
For this article I took the laziest possible approach: I found a turn-key monitoring agent called Telegraf and configured it to push to SingleStore Kai through its MongoDB output plugin. Here's my dead-simple configuration:
[agent]
  ## collect every 5 seconds, aligned to the interval, and flush just as often
  interval = "5s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "5s"
  flush_jitter = "0s"
  precision = ""

[[inputs.cpu]]
  ## per-core and total CPU usage, as percentages rather than cumulative time
  percpu = true
  totalcpu = true
  collect_cpu_time = false

[[inputs.mem]]

[[outputs.mongodb]]
  ## SingleStore Kai speaks the MongoDB wire protocol, so the stock
  ## MongoDB output plugin works unchanged
  dsn = "mongodb://my connection string"
  database = "jtdb"
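Before wiring up the output for real, running telegraf --test --config telegraf.conf gathers the inputs once and prints the metrics to stdout, which is a quick way to sanity-check the config.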
I ran it as a systemd service:
[Unit]
Description=Telegraf Docker Container
After=network.target docker.service
Requires=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --name telegraf \
  -v /home/ec2-user/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf
ExecStop=/usr/bin/docker stop telegraf
ExecStopPost=/usr/bin/docker rm telegraf

[Install]
WantedBy=multi-user.target
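With that saved as /etc/systemd/system/telegraf.service, a systemctl daemon-reload followed by systemctl enable --now telegraf starts the container and keeps it running across reboots.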
After that, data just appeared in my database. Telegraf automatically configured the collections as MongoDB time-series collections, which means SingleStore Kai sets up the SORT KEY automatically.
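Under the hood that collection setup is just the driver's time-series API. A minimal sketch with the MongoDB Node.js driver (the timeField and metaField match the fields Telegraf writes, though the exact options the plugin passes are its own defaults; KAI_DSN is my stand-in for the connection string):

import { MongoClient } from "mongodb";

// KAI_DSN is a stand-in env var for the Kai connection string
const client = new MongoClient(process.env.KAI_DSN ?? "");
const db = client.db("jtdb");

// Creating a MongoDB time-series collection; through Kai this becomes a
// columnstore table with its SORT KEY on the time field.
await db.createCollection("cpu", {
  timeseries: { timeField: "timestamp", metaField: "tags" },
});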
For the chart above I did what probably most developers would do in 2025: I asked ChatGPT to write me an auto-refreshing API and chart. It did pretty well. Here's the part of the API that pulls from the database; note the projection, which avoids loading all the other fields Telegraf includes from disk.
const cpu = await db
  .collection("cpu")
  .aggregate([
    // only points newer than `ago`, skipping the aggregate "cpu-total" row
    { $match: { timestamp: { $gt: ago }, "tags.cpu": { $ne: "cpu-total" } } },
    {
      // project just the fields the chart needs, so the rest of the
      // document never gets read off disk
      $project: {
        _id: 0,
        timestamp: 1,
        usage_idle: 1,
        tag: "$tags.cpu",
      },
    },
    { $sort: { timestamp: 1 } },
  ])
  .toArray();

const mem = await db
  .collection("mem")
  .aggregate([
    { $match: { timestamp: { $gt: ago } } },
    {
      $project: {
        _id: 0,
        timestamp: 1,
        used_percent: 1,
      },
    },
    { $sort: { timestamp: 1 } },
  ])
  .toArray();
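The rest of the route is glue; mine boils down to roughly this (a sketch assuming a Next.js App Router handler and the official MongoDB Node.js driver; the KAI_DSN env var and the queryMetrics helper are illustrative names, not the original code):

import { NextResponse } from "next/server";
import { MongoClient, type Db } from "mongodb";

// assumed: Kai connection string in an env var; one shared client
const client = new MongoClient(process.env.KAI_DSN ?? "");

async function queryMetrics(db: Db, ago: Date) {
  // the same two aggregations shown above, condensed
  const cpu = await db
    .collection("cpu")
    .aggregate([
      { $match: { timestamp: { $gt: ago }, "tags.cpu": { $ne: "cpu-total" } } },
      { $project: { _id: 0, timestamp: 1, usage_idle: 1, tag: "$tags.cpu" } },
    ])
    .toArray();
  const mem = await db
    .collection("mem")
    .aggregate([
      { $match: { timestamp: { $gt: ago } } },
      { $project: { _id: 0, timestamp: 1, used_percent: 1 } },
    ])
    .toArray();
  // normalize timestamps to unix seconds and merge both series into one
  // time-ordered array, so the client can track a single high-water mark
  return [...cpu, ...mem]
    .map((d) => ({ ...d, timestamp: Math.floor(new Date(d.timestamp).getTime() / 1000) }))
    .sort((a, b) => a.timestamp - b.timestamp);
}

export async function GET(request: Request) {
  // `from` arrives as a unix timestamp in seconds, echoed back by the client below
  const from = Number(new URL(request.url).searchParams.get("from") ?? "0");
  return NextResponse.json(await queryMetrics(client.db("jtdb"), new Date(from * 1000)));
}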
The client component then renders the data using chart.js. This was the frustrating part, even with ChatGPT, but together we got it done. The interesting part is that, since this is time-series data, the client keeps track of the last data point received and only requests a delta.
const [data, setData] = useState([] as any[]);
const chartRef = useRef(null as any);
// high-water mark: the newest timestamp we have, seeded `lookback` seconds ago
const fromRef = useRef(Math.floor(Date.now() / 1000) - lookback);

useEffect(() => {
  const fetchMetrics = async () => {
    try {
      // only request points newer than the last one we received
      const response = await fetch(`/api/metrics?from=${fromRef.current}`);
      const result = await response.json();
      setData((prevData) => [...prevData, ...result]);
      // keep the previous mark if the delta came back empty
      fromRef.current = result?.[result.length - 1]?.timestamp ?? fromRef.current;
    } catch (error) {
      console.error(error);
    }
  };
  const interval = setInterval(fetchMetrics, 5000);
  fetchMetrics();
  return () => clearInterval(interval);
}, []);
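Hooking the accumulated data into the chart is mostly boilerplate; the update step ends up looking roughly like this (a sketch that only draws the memory series, assuming chartRef.current holds a chart.js Chart instance with one pre-configured dataset):

// a second effect pushes new points into the live chart whenever data grows
useEffect(() => {
  const chart = chartRef.current;
  if (!chart || data.length === 0) return;
  const memPoints = data.filter((d) => d.used_percent !== undefined);
  chart.data.labels = memPoints.map((d) =>
    new Date(d.timestamp * 1000).toLocaleTimeString()
  );
  chart.data.datasets[0].data = memPoints.map((d) => d.used_percent);
  chart.update("none"); // skip animation on the 5-second refresh
}, [data]);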
One more thing to note: since the SingleStore Free Tier limits stored data to 1 GiB compressed, and I use it for ALL of my projects, I need something to expire old data. For this I've used SingleStore's Job Service to run the following every day, keeping only a week of monitoring data:
DELETE FROM mem WHERE timestamp < DATE_ADD(NOW(), INTERVAL -7 DAY);
DELETE FROM cpu WHERE timestamp < DATE_ADD(NOW(), INTERVAL -7 DAY);
The monitoring use-case is another proven use of SingleStore. You might say: well, you didn't need it for one VPS. And that's true. But this same approach scales up to many thousands of systems, since SingleStore is a horizontally-scalable distributed database. And as I've noted before, this one Free Tier instance is currently backing every project on this blog.
One of SingleStore's strengths is its versatility, allowing one database instance to replace multiple special-purpose systems. When this reduces the overall work performed in a service (say, by eliminating a costly ETL between different databases), it generally leads to lower costs and fewer failure modes. Any time one of those thousand-logo architecture diagrams can be simplified, it's a good thing. I'll keep adding more use cases in future articles to reinforce this point. Thanks for reading!