Run Tomtit scenarios as cron jobs and more.
"What's in a name?"
Cromtit = Crontab + Tomtit
- Run Tomtit jobs as cron jobs
- Asynchronous jobs queue
- Shared job artifacts
- Throttling to protect a system from overload (TBD)
- View job logs and reports via a Cro web application interface
Cromtit uses Sparky as a job runner engine, so please install and configure Sparky first. Then install Cromtit:

```bash
zef install Cromtit
```
This example restarts an Apache server every Sunday at 08:00, local server time.

Create a Bash task:

```bash
mkdir -p tasks/apache/restart
cat << HERE > tasks/apache/restart/task.bash
sudo apachectl graceful
HERE
```
Create a Tomtit scenario:

```bash
tom --edit apache-restart
```
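The scenario body should run the Bash task created above. A minimal sketch, assuming the Sparrow6 task-run DSL that Tomtit scenarios use (the exact arguments may differ):

```raku
# a sketch: run the local Bash task
# defined in tasks/apache/restart/task.bash
task-run "restart apache", "tasks/apache/restart";
```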
Create a Cromtit configuration file, jobs.yaml:

```yaml
projects:
  apache-restart:
    # should be a git repository with tomtit scenarios
    path: .
    action: apache-restart
    crontab: "0 8 * * 0"
```
Commit changes to the git repo:

```bash
echo ".cache" > .gitignore
git add .tom/ .gitignore jobs.yaml
git commit -a -m "apache restart"
git remote add origin git@github.com:melezhik/cromtit-cookbook.git
git branch -M main
git push -u origin main
```
Cromtit comes with a configuration language that lets you define jobs in a jobs.yaml file and edit it. Then apply the changes:

```bash
cromt --conf jobs.yaml
```
Configuration file specification
Cromtit configuration contains a list of Tomtit projects, for example (the project names and local path here are illustrative):

```yaml
# list of Tomtit projects
projects:
  project-one:
    path: ~/projects/project-one
    crontab: "30 * * * *"
    action: pull html-report
    options: --no_index_update --dump_task
  project-two:
    git: https://github.com/melezhik/r3tool.git
    action: pull install
```
Project specific configuration
Every project item has a specific configuration:

```yaml
projects:
  my-project: # illustrative project name
    # run `tom install`
    action: install
    # every one hour
    crontab: "30 * * * *"
    # with tomtit options:
    options: --dump_task
    # setting env variables:
    vars:
      foo: bar
```
name

The key should define a unique project name.

action

Defines the name of the Tomtit scenario that will be run. Optional.
Multiple actions could be set as a space separated string:

```yaml
# will trigger `tom pull` && `tom build` && `tom install`
action: pull build install
```
path

Tomtit project path. Optional. Sets a local directory path with a Tomtit project.

git

Sets a git repository with a Tomtit project. One can use either ssh or https:// schemes for git URLs:
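For example, a sketch of both options (the local path is illustrative):

```yaml
# local directory with a Tomtit project
path: ~/projects/my-app
# or a remote git repository
git: https://github.com/melezhik/r3tool.git
```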
Triggering on SCM changes

Use the trigger flag to automatically trigger a job on SCM changes:
```yaml
git: https://github.com/melezhik/r3tool.git
# trigger a new job in case of
# any new changes (commits)
# arrive to the default branch
# of the r3tool.git repo
trigger: true
```
To set a specific branch for triggering, use the branch parameter:
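A sketch, assuming branch is the parameter name (the branch itself is illustrative):

```yaml
git: https://github.com/melezhik/r3tool.git
trigger: true
branch: dev # parameter name assumed
```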
crontab

Represents a crontab entry (how often and when to run a project); it should follow the Sparky crontab format. Optional. If not set, implies manual run.
```yaml
# run every 10 minutes
crontab: "*/10 * * * *"
```
options

Tomtit cli options. Optional:

```yaml
options: --dump_task --verbose
```
vars

Additional environment variables get passed to a job. Optional:

```yaml
# variable names are illustrative;
# don't pass creds
# as clear text
vars:
  foo: bar
```
url

Sets the Sparky API URL. Optional. See the hosts.url description.

queue-id

Sparky project name. Optional. See the hosts.queue-id description.

title

Job title. Optional. See the job title section below.
sparrowdo

Overrides the job sparrowdo configuration. Optional. For example:

```yaml
# run job in docker container
# named raku-alpine-repo
sparrowdo:
  docker: raku-alpine-repo
```
hosts

By default jobs get run on localhost. To run jobs on specific hosts in parallel, use the hosts parameter:

```yaml
# runs `tom update` on every host
# in parallel
# (Sparky API URLs are illustrative)
action: update
hosts:
  - url: http://192.168.0.1:4000
  - url: http://192.168.0.2:4000
```
The hosts list contains a list of Sparky API URLs (see also the comment on the optional url below), and the hosts need to be part of the same Sparky cluster.
Optionally, every host can override vars:
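A sketch (the variable names are illustrative):

```yaml
hosts:
  - url: http://192.168.0.1:4000
    vars:
      role: master
  - url: http://192.168.0.2:4000
    vars:
      role: worker
```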
And sparrowdo configurations:
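Again a sketch; the docker image name is illustrative:

```yaml
hosts:
  - url: http://192.168.0.1:4000
    sparrowdo:
      docker: raku-alpine-repo
```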
url is optional; if omitted, a job gets run on the same host, so this code will run 3 jobs in parallel on the same host:
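A sketch with three url-less host entries; the vars are illustrative, used only to distinguish the jobs:

```yaml
hosts:
  - vars:
      worker: 1
  - vars:
      worker: 2
  - vars:
      worker: 3
```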
queue-id parameters are also applicable for hosts entries; see the queue-id section below.
Projects might have dependencies, so that some jobs might run before or after a project's job:
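A sketch, assuming before lists dependency jobs that run ahead of the project's job (the project names are illustrative):

```yaml
projects:
  app-test:
    action: test
    before:
      # run the build job before the test job
      - name: app-build
  app-build:
    action: build
```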
before and after are lists of objects that accept the following parameters:
- name. Project name. Required.
- action. Overrides the project job action. Optional. See the project action specification.
- vars. Overrides the project job vars. Optional. See the project vars specification.
- sparrowdo. Overrides the project job sparrowdo configuration. Optional. See the project sparrowdo configuration specification.
- hosts. Overrides the project job hosts. Optional. See the project hosts specification.
Nested dependencies are allowed: a dependency might have another dependency, and so on. Just be cautious about cycles; the result should be a directed acyclic graph of dependencies.
timeout

One can set a job timeout by using the timeout parameter:

```yaml
# wait 1200 sec till all 4 jobs have finished
# (Sparky API URLs are illustrative)
timeout: 1200
hosts:
  - url: http://192.168.0.1:4000
  - url: http://192.168.0.2:4000
  - url: http://192.168.0.3:4000
  - url: http://192.168.0.4:4000
```
- A timeout set in a job with hosts parallelization waits till all hosts' jobs have finished within timeout seconds, or raises a "job timeout" exception.
- A timeout for a single job (without hosts parallelization) affects only this job: Cromtit waits timeout seconds till the job has finished.
- A timeout set in a dependent job (one that has other job dependencies) waits timeout seconds till all dependency jobs have finished, or raises a "job timeout" exception.
queue-id

A hosts list is executed in parallel; to enable sequential execution, use queue-id. Jobs with the same queue-id are executed in the same queue and thus run one by one:
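A sketch matching the description below (the Sparky API URLs are illustrative):

```yaml
hosts:
  - url: http://192.168.0.1:4000
    queue-id: Q1
  - url: http://192.168.0.2:4000
    queue-id: Q1
  - url: http://192.168.0.3:4000
    queue-id: Q1
  - url: http://192.168.0.4:4000
    queue-id: Q2
  - url: http://192.168.0.5:4000
    queue-id: Q2
```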
In this example jobs are executed in 2 parallel queues:
- hosts 192.168.0.1 - 192.168.0.3 are executed one by one in queue Q1
- hosts 192.168.0.4 - 192.168.0.5 are executed one by one in queue Q2
title

One can override the standard job title that appears in reports by using the title option at an arbitrary level:
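A sketch; the title strings are illustrative:

```yaml
title: my-app
hosts:
  - title: my-app worker 1
  - title: my-app worker 2
  - title: my-app worker 3
```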
This example runs the same job 3 times in parallel, with the per-host titles appearing in the report list.
Jobs can share artifacts with each other:
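The exact artifacts schema below is an assumption sketched for illustration; only the project names, the dependency relation, and the .build/rakudo.tar.gz path come from the description that follows:

```yaml
projects:
  fastspec-test:
    action: test
    before:
      - name: fastspec-build
        # the `artifacts` keys here are assumed
        artifacts:
          - name: rakudo.tar.gz
            path: .build/rakudo.tar.gz
  fastspec-build:
    action: build
```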
In this example the dependency job fastspec-build copies the file .build/rakudo.tar.gz into internal storage, so that the dependent job fastspec-test can access it. The file will be located within the Tomtit scenario at
Dedicated storage server

Sometimes, when hosts do not see each other directly (for example, when some jobs get run on localhost), a dedicated storage server could be an option, ensuring artifacts get copied to and read from a publicly accessible Sparky API instance:
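A sketch; the storage key name and the URL are assumptions for illustration:

```yaml
# publicly accessible Sparky API instance
# acting as a dedicated artifact storage
# (key name assumed)
storage: http://sparky.example.com:4000
```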
cromt is the Cromtit cli.

--conf

Path to the Cromtit configuration file to apply. Optional; if not set, a default configuration file path is used.
Sparky exposes a web UI to track projects, cron jobs and reports:
Configuration file examples
You can find configuration file examples in the melezhik/cromtit-cookbook repository.
The Cookbook.md file contains useful user scenarios.
Existing Cromtit based projects
Thanks to God and Christ: "For the LORD gives wisdom; from his mouth come knowledge and understanding."