Sparky is a flexible and minimalist continuous integration server and distributed task runner written in Raku.




Sparky workflow in 4 lines:

$ nohup sparkyd & # run Sparky daemon to trigger jobs
$ nohup cro run & # run Sparky CI UI to see job statuses and reports
$ nano ~/.sparky/projects/my-project/sparrowfile  # write a job scenario
$ firefox # run jobs and get reports


Installation

$ sudo apt-get install sqlite3
$ git clone https://github.com/melezhik/sparky.git
$ cd sparky && zef install .

Database initialization

Sparky requires a database to operate.

Run the database initialization script to populate the database schema:

$ raku db-init.raku

Sparky components

Sparky comprises several components:

Job scheduler

To run the Sparky job scheduler (aka daemon), run in a console:

$ sparkyd

Scheduler logic: the daemon periodically wakes up to check for and trigger jobs; the sleep interval (in seconds) can be set via the --timeout option or the SPARKY_TIMEOUT environment variable:

$ sparkyd --timeout=600 # sleep 10 minutes
$ SPARKY_TIMEOUT=30 sparkyd ...

Running the job scheduler in daemonized mode:

$ nohup sparkyd &

To install sparkyd as a systemd unit:

$ nano utils/install-sparkyd-systemd.raku # change working directory and user
$ sparrowdo --sparrowfile=utils/install-sparkyd-systemd.raku --no_sudo --localhost

Sparky Jobs UI

Sparky has a simple web UI that allows triggering jobs and viewing reports.

To run Sparky UI web application:

$ cro run

To install Sparky CI web app as a systemd unit:

$ nano utils/install-sparky-web-systemd.raku # change working directory, user and root directory
$ sparrowdo --sparrowfile=utils/install-sparky-web-systemd.raku --no_sudo --localhost

By default, the Sparky UI application listens on port 4000; to override the host and port, set SPARKY_HOST and SPARKY_TCP_PORT in the ~/sparky.yaml configuration file:
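For example, a minimal override might look as follows (the host and port values here are illustrative, not defaults):

```yaml
# ~/sparky.yaml -- values are illustrative
SPARKY_HOST: 0.0.0.0
SPARKY_TCP_PORT: 5000
```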


Sparky jobs definitions

Each Sparky job needs its own directory under the Sparky root directory:

$ mkdir ~/.sparky/projects/teddy-bear-app

To create a job scenario, create a file named sparrowfile in the job directory.

Sparky uses pure Raku for its job syntax, for example:

$ nano ~/.sparky/projects/hello-world/sparrowfile
say "hello Sparky!";

To allow a job to be executed by the scheduler, create sparky.yaml, a YAML-based job definition; a minimal form would be:

$ nano ~/.sparky/projects/hello-world/sparky.yaml
allow_manual_run: true

Extending scenarios with Sparrow automation framework

To extend its core functions, Sparky is fully integrated with the Sparrow automation framework.

Here is an example of a job that uses Sparrow plugins to build a typical Raku project:

$ nano ~/.sparky/projects/raku-build/sparrowfile
directory "project";

git-scm 'https://github.com/melezhik/rakudist-teddy-bear.git', %(
  to => "project",
);

zef "{%*ENV<PWD>}/project", %(
  depsonly => True
);

zef 'TAP::Harness App::Prove6';

bash 'prove6 -l', %(
  debug => True,
  cwd => "{%*ENV<PWD>}/project/"
);

A repository of Sparrow plugins is available at https://sparrowhub.io

Sparky workers

Sparky uses Sparrowdo to launch jobs in three fashions:

/--------------------\                                             [ localhost ]
| Sparky on localhost| --> sparrowdo client --> job (sparrow) -->  [ container ]
\--------------------/                                             [ ssh host  ]

By default, job scenarios are executed on the same machine Sparky runs on; to run jobs on a remote host, set the sparrowdo section in the sparky.yaml file:

$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
sparrowdo:
  host: ''
  ssh_private_key: /path/to/ssh_private/key.pem
  ssh_user: sparky
  no_index_update: true
  sync: /tmp/repo

See the sparrowdo CLI documentation for an explanation of the sparrowdo configuration section.

Skip bootstrap

Sparrowdo client bootstrap might take some time.

To disable bootstrap, use the bootstrap: false option; this is useful when the sparrowdo client is already installed on the target host:

sparrowdo:
  bootstrap: false

Purging old builds

To remove old job builds, set the keep_builds parameter in sparky.yaml:

$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml

Set the number of builds to keep:

keep_builds: 10

This makes Sparky remove old builds, keeping only the last keep_builds builds.

Run jobs by cron

To run Sparky jobs periodically, set a crontab entry in the sparky.yaml file.

For example, to run a job every hour at minutes 30, 50, and 55:

$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
crontab: "30,50,55 * * * *"

Follow the Time::Crontab documentation for the crontab entry format.
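Any standard five-field cron expression should work here; for instance (illustrative alternatives, one per sparky.yaml):

```yaml
crontab: "*/10 * * * *"   # every 10 minutes
# crontab: "0 2 * * 1"    # at 02:00 every Monday
```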

Manual run

To trigger a job manually from the web UI, use allow_manual_run:

$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
allow_manual_run: true

Trigger job by SCM changes

To trigger Sparky jobs on SCM changes, define an scm section in the sparky.yaml file:

scm:
  url: $SCM_URL
  branch: $SCM_BRANCH


For example:

scm:
  url: https://github.com/melezhik/rakudist-teddy-bear.git
  branch: master

Once a job is triggered, the respective SCM data is available via the tags()&lt;SCM_*&gt; function:

directory "scm";

say "current commit is: {tags()<SCM_SHA>}";

git-scm tags()<SCM_URL>, %(
  to => "scm",
  branch => tags()<SCM_BRANCH>
);

bash "ls -l {%*ENV<PWD>}/scm";

To set default values for SCM_URL and SCM_BRANCH, use sparrowdo tags:

sparrowdo:
  tags: SCM_URL=https://github.com/melezhik/rakudist-teddy-bear.git,SCM_BRANCH=master

This is useful when triggering a job manually.

Flappers protection mechanism

The flapper protection mechanism excludes from scheduling any SCM URLs whose git connections repeatedly time out; this protects the sparkyd worker from stalling.

To disable the flapper protection mechanism, set the SPARKY_FLAPPERS_OFF environment variable or adjust the ~/sparky.yaml configuration file:

  flappers_off: true

Disable jobs

To prevent a Sparky job from executing, use the disabled option:

$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml

disabled: true
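The job-level options covered so far can live together in one sparky.yaml; a sketch with illustrative values:

```yaml
# ~/.sparky/projects/teddy-bear-app/sparky.yaml
allow_manual_run: true        # allow triggering from the web UI
keep_builds: 10               # purge all but the last 10 builds
crontab: "30,50,55 * * * *"   # also run on a cron schedule
disabled: false               # set to true to switch the job off
```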

Advanced topics

The following advanced topics cover some of Sparky's more powerful features.

Job UIs

The Sparky UI DSL allows you to declaratively describe UIs for Sparky jobs and pass user input into a scenario as variables.

Read more at docs/ui.md

Downstream jobs

Downstream jobs run after a main job has finished.

Read more at docs/downstream.md

Sparky triggering protocol (STP)

The Sparky triggering protocol allows jobs to be triggered automatically by creating files in a special format.

Read more at docs/stp.md


Job API

The Job API allows orchestrating multiple Sparky jobs.

Read more at docs/job_api.md

Sparky plugins

Sparky plugins are a way to extend Sparky jobs by writing reusable plugins as Raku modules.

Read more at docs/plugins.md


HTTP API

The Sparky HTTP API allows executing Sparky jobs remotely over HTTP.

Read more at docs/api.md



Authentication

The Sparky web server comes with two authentication protocols; choose the proper one depending on your requirements.

Read more at docs/auth.md


ACL

Sparky ACL allows creating access control lists to manage role-based access to Sparky resources.

Read more at docs/acl.md

Databases support

Sparky keeps its data in a database; by default it uses SQLite, and several other databases are supported as well.

Read more at docs/database.md

TLS Support

The Sparky web server may run over TLS. To enable this, add a couple of parameters to the ~/sparky.yaml configuration file:

SPARKY_USE_TLS: true
tls:
  private-key-file: '/home/user/.sparky/certs/www.example.com.key'
  certificate-file: '/home/user/.sparky/certs/www.example.com.cert'

SPARKY_USE_TLS enables SSL mode and the tls section holds the paths to the SSL certificate (key and certificate parts).
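For local testing, a self-signed key and certificate matching the paths above can be generated with standard openssl (a production setup would use a CA-issued certificate instead):

```shell
# create the certs directory and a self-signed key/cert pair, valid for 365 days
mkdir -p ~/.sparky/certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ~/.sparky/certs/www.example.com.key \
  -out ~/.sparky/certs/www.example.com.cert \
  -days 365 -subj "/CN=www.example.com"
```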

Additional topics

Sparky cli

The Sparky CLI allows triggering jobs from the terminal.

Read more at docs/cli.md

Sparky Environment variables

Use environment variables to tune Sparky configuration.

Read more at docs/env.md


Glossary

Some useful glossary terms.

Read more at docs/glossary.md


CSS

Sparky uses Bulma as the CSS framework for its web UI.

Sparky job examples

Examples of various Sparky jobs can be found in the examples/ folder.

See also


Author

Alexey Melezhik