Cover image generated by Nano Banana 2
TL;DR
In this post, I introduce a simple Ansible playbook that manages scheduled tasks with systemd timers and provides an AI-friendly interface for running recurring jobs. It offers much greater flexibility than cron.
As a bonus, I also present a Python-based orchestrator for Rclone sync jobs, which is useful for setting up automatic cloud backups and works well with the Ansible playbook.
Motivation
systemd timers have become the modern standard task scheduler for Linux distributions, replacing the decades-old cron. They provide several benefits over using cron:
- Built-in, Centralized Logging: systemd integrates natively with the systemd journal. Job output is automatically captured, tagged, and easily queried using `journalctl -u your-service.service` (`-u` specifies the source unit/service of the log entries).
- Flexible Scheduling: cron relies exclusively on real-time “wall-clock” calendar scheduling. systemd supports `OnCalendar=` (similar to cron) as well as monotonic timers (events relative to system uptime / time since boot).
- Randomized Delays: systemd timers feature a `RandomizedDelaySec=` configuration option, which can be used to prevent all scheduled services from being triggered at exactly the same time (also known as a “thundering herd”).
- Handling Missed Runs: Although `anacron` already provides similar functionality, it has a minimum granularity of one day. systemd timers with `Persistent=yes` will run missed jobs at system boot, often with finer granularity.
Additional benefits include improved dependency management, execution environment predictability, protection against overlapping runs, and advanced resource control. Please see the relevant resources for details.
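To make this concrete, here is a minimal sketch of a user-scoped unit pair that combines the calendar, persistence, and randomized-delay features mentioned above. The file names and the backup script path are hypothetical placeholders:

```ini
# ~/.config/systemd/user/backup-notes.service
[Unit]
Description=Back up my notes folder

[Service]
Type=oneshot
# %h expands to the user's home directory
ExecStart=/usr/bin/bash %h/bin/backup-notes.sh
```

```ini
# ~/.config/systemd/user/backup-notes.timer
[Unit]
Description=Run backup-notes daily

[Timer]
OnCalendar=daily
# Run at boot if a scheduled run was missed while powered off
Persistent=true
# Spread start times by up to 5 minutes
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
```

After placing the files, activate the timer with `systemctl --user daemon-reload` followed by `systemctl --user enable --now backup-notes.timer`.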
From a hybrid approach to fully adopting systemd timers
Despite these benefits, systemd timers are much more verbose to set up (requiring both a .service and a .timer file). The learning curve is also much steeper than with cron because of the complexity that comes with the more powerful feature set. For these reasons, I’d been using a hybrid approach until recently: systemd timers for important jobs, and cron for casual, error-tolerant jobs. Investigating a full migration to systemd timers hadn’t been a priority for me due to the low benefit-to-cost ratio.
The advent of coding agents changed this situation. I was able to conduct preliminary research on possible solutions, pick one that made the most sense to me, and implement it within an hour. Granted, I spent a couple more hours polishing the solution, but I would have spent multiple hours Googling and reading documentation before I could come up with a solution without the help of coding agents (or LLMs in general).
I’ve moved all my periodic jobs to systemd timers now, and I’m quite happy with the current solution. The ease of adding new jobs allows me to expand my automatic backup routine to more folders, with better, finer-grained frequency control. I also implemented an Rclone sync job orchestrator using coding agents to define and manage backup jobs. This orchestrator works perfectly with the playbook and systemd timers.
On a side note, there has recently been some panic selling in the stock market that stems from the fear that SaaS (Software-as-a-Service) and software in general will be disrupted by coding agents. The playbook and the Rclone sync orchestrator I developed definitely replace some features of backup SaaS products or other software, but I would not buy those products for my personal use anyway, so overall it’s been a net positive for me and not a net negative for any SaaS company. For anything serious enough that I would pay for, I would definitely not trust a solution primarily generated by coding agents in a few hours.
Note: I mainly used Codex CLI with GPT-5.3-Codex to build the two projects mentioned above.
Ansible-managed user-scoped systemd timers
After considering a few options, including Python scripts to run systemd commands and using complicated dot-file management tools, I think using Ansible makes the most sense for my use case. It parses config files that define the timers and the associated scripts or commands, and manages the .timer and .service files automatically. This config-based approach also works pretty well with coding agents, allowing them to manipulate the config file to automatically add, delete, and update tasks. It could be a component within a multi-step workflow.
I have published the Ansible playbook on GitHub at ceshine/ansible-systemd-user-cron under the MIT license. The most up-to-date documentation can be found in the project README. I’ll just briefly explain some high-level concepts and design choices here.
- User-Scoped Jobs Only: This Ansible playbook only supports user-scoped systemd timers. It removes the need for sudo privileges, which greatly reduces the chances of security issues or system disruptions (just make sure you do not run any of the scripts or commands the timers execute with sudo privileges). User-scoped systemd timers should suffice for most non-system-level periodic jobs. If you really need system-level ones, I suggest you create a fork of this project, modify it to support system-scoped timers, and run security audits on it before putting it to use.
- Prefix-based File Management: For the Ansible playbook to delete jobs that are no longer used automatically, it needs a way to identify jobs it created. The simplest way to do so is to add a unique prefix to the names of all the files it creates. This is not a perfect solution, as there’s still a small chance of name conflicts. However, if you pick the prefix carefully by avoiding any generic names, this should not be a problem most of the time.
- Overridable Default Config: By default, the playbook loads the configurations from `configs/regular.yml`. It expects two root-level keys in the YAML file — `task_prefix` and `scheduled_tasks`. If you want to support multiple sets of configurations (i.e., multiple sets of jobs), you can use `-e @configs/work.yml` to override the default configurations. Make sure to provide both root-level keys in each config file. You will most likely want to use different prefixes across the config files.
- Config Validation: The playbook performs some basic validation on the configurations it reads. The checks are not exhaustive, but should help prevent some common misuses and reduce time spent debugging. Note that not all systemd configuration options are supported at the moment — only those that I currently need. Feel free to fork the project and add support for more features. A pull request for new features would be much appreciated as well.
The GitHub repository contains an example configuration file. For those interested, below is a simplified sample of one of my configuration files:
```yaml
---
# Define your dynamic prefix here
task_prefix: "ces080-"

# Define your scheduled tasks here
scheduled_tasks:
  - name: price-scrapers
    working_directory: /home/ceshine/redacted/price-scrapers
    command: /usr/bin/doppler run -- /home/ceshine/.local/bin/uv run python run_jobs.py --min-interval 12
    schedule: "daily"
  - name: backup-research-notes
    command: /bin/bash /home/ceshine/research_notes/backup_research_notes.sh
    schedule: "*-*-* 12,16,21:05"
  - name: rclone-config-backup
    command: /home/ceshine/.local/bin/uv run rclone-sync-runner -c configs/config_backup.yml
    working_directory: /home/ceshine/redacted/rclone-sync-runner
    timer:
      on_boot_sec: 600
      on_unit_active_sec: 14400
      randomized_delay_sec: 120
```
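Assuming a straightforward mapping from the `timer` keys to the corresponding systemd options (my reading of the config schema, not a verbatim copy of the playbook’s generated output), the `rclone-config-backup` entry above would translate into a timer unit roughly like this:

```ini
# Hypothetical generated file:
# ~/.config/systemd/user/ces080-rclone-config-backup.timer
[Unit]
Description=Timer for rclone-config-backup

[Timer]
# on_boot_sec: first run 10 minutes after boot
OnBootSec=600
# on_unit_active_sec: then every 4 hours after each run
OnUnitActiveSec=14400
# randomized_delay_sec: add up to 2 minutes of jitter
RandomizedDelaySec=120

[Install]
WantedBy=timers.target
```

Note that this job uses monotonic timers (`OnBootSec=`/`OnUnitActiveSec=`) rather than a calendar schedule, which cron cannot express.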
Run `journalctl --user -u your-job.service` to read the execution logs for a specific job. Replace “your-job” with the name of the job you want to inspect.
Bonus: A Python package for defining and running Rclone sync jobs
After developing the Ansible playbook for managing systemd timers, I created a Python package that supports defining Rclone sync jobs using YAML files and executing the jobs sequentially with some basic result parsing and reporting functionality. It provides a few conveniences:
- Bundling Sync Tasks: Sometimes it makes sense to bundle several Rclone sync commands into one job instead of creating multiple systemd timers. First, it avoids cluttering the timer list. It also helps group sync commands that are semantically similar and require the same execution frequency. For example, I have a job that runs sync commands for configuration files and some automatically updated artifacts that are usually small. They’re grouped together for more frequent backups.
- Result Parsing: Internally, the Python package uses the subprocess module to run the Rclone sync command. It uses the `--use-json-log` flag to let Rclone generate JSON log entries, allowing the package to retrieve a structured report of the sync results. For now, the results are displayed in a table powered by the `rich` package at the end of execution. This is not easily achievable when running Rclone commands directly from a scheduled systemd service.
- Result Notification: Currently, this package only prints the sync results to the terminal, which systemd captures and makes available via `journalctl`. However, I have already implemented a simple hook/callback system that allows adding notification methods (e.g., Telegram) to notify users when a job fails or to report job completion.
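To illustrate the general technique, here is a minimal sketch of running an Rclone sync via subprocess and tallying its JSON log output. This is my own simplified version, not the package’s actual implementation; it assumes rclone’s JSON log lines carry `level` and `msg` fields, and the summary structure is invented for this example:

```python
import json
import subprocess
from typing import Iterable


def summarize_json_log(lines: Iterable[str]) -> dict:
    """Tally rclone --use-json-log entries by severity.

    Assumes each JSON line has "level" and "msg" fields;
    lines that are not valid JSON are skipped.
    """
    summary = {"entries": 0, "warnings": 0, "errors": []}
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue
        try:
            entry = json.loads(raw)
        except json.JSONDecodeError:
            continue  # rclone can also emit plain-text lines
        summary["entries"] += 1
        level = entry.get("level")
        if level == "error":
            summary["errors"].append(entry.get("msg", ""))
        elif level == "warning":
            summary["warnings"] += 1
    return summary


def run_sync_job(source: str, destination: str) -> dict:
    """Run a single rclone sync and return the parsed summary.

    rclone writes its log entries to stderr, not stdout.
    """
    proc = subprocess.run(
        ["rclone", "sync", source, destination, "--use-json-log"],
        capture_output=True,
        text=True,
    )
    summary = summarize_json_log(proc.stderr.splitlines())
    summary["exit_code"] = proc.returncode
    return summary
```

A real orchestrator would loop over the jobs defined in the YAML config, call something like `run_sync_job` for each, and feed the summaries into the reporting and notification hooks.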
The package is available on GitHub at ceshine/rclone-sync-runner under the MIT license. The most up-to-date documentation is in the project README.
Here’s an example config file (also available at configs/example.yml):
```yaml
version: 1

global:
  # Path/name of the rclone binary to execute.
  rclone_bin: rclone
  # One of: DEBUG, INFO, WARNING, ERROR, CRITICAL
  log_level: INFO
  # Continue running remaining jobs if one job fails.
  continue_on_error: true

jobs:
  - name: photos-to-backblaze
    source: /srv/data/photos
    destination: b2:home-archive/photos
    extra_args:
      - --fast-list
      - --transfers=8
      - --checkers=16
      - --delete-during
  - name: documents-to-onedrive
    source: /srv/data/documents
    destination: onedrive:backup/documents
    extra_args:
      - --exclude=**/.DS_Store
      - --exclude=**/tmp/**
```
Note that the YAML schema is much more flexible than the Ansible playbook’s, because the `extra_args` items are passed directly as arguments to the `rclone` command.
This package has not yet been published to PyPI. To use it, clone the project and install it with `uv tool install .`, or run it with `uv run --project path/to/cloned/project`. The executable is named `rclone-sync-runner`. It does not need to be used with the Ansible playbook; you can use it as a standalone tool.
Below are sample log entries generated by a scheduled systemd job:
Screenshot of Logs