LLM::Resources
Raku package providing subs and CLI scripts for specific, repeatable LLM-based workflows.
For usage examples see the articles [AA1] and [AA2] in the references below.
There are several options for LLM access with this package; see the WWW::* packages (OpenAI, Gemini, MistralAI, LLaMA, Ollama) listed in the references.
Installation
Preliminary installations (optional)
The code-generation LLM graphs use the package "DSL::Translators",
which is somewhat "heavy" to install because of its multiple dependencies.
A faster installation -- skipping tests -- can be done with the following script:
curl -O https://raw.githubusercontent.com/antononcube/RakuForPrediction-book/refs/heads/main/scripts/raku-dsl-install.sh
source raku-dsl-install.sh
To check that the installation succeeded, use the following command in a terminal:
dsl-translation 'use dfTitanic; filter by sex is male; show counts'
(Meaningful translation code should be obtained.)
Package installation
From Zef ecosystem:
zef install LLM::Resources
From GitHub:
zef install https://github.com/antononcube/Raku-LLM-Resources.git
Comprehensive text summarization
Here is the usage message of CLI script llm-text-summarization:
llm-text-summarization --help
# Usage:
# llm-text-summarization <input> [--title|--with-title=<Str>] [--conf|--llm|--llm-conf[=Any]] [--async] [--progress] [-o|--output=<Str>] -- LLM-based comprehensive text summarization.
#
# <input> Text, file path, or a URL.
# --title|--with-title=<Str> Title of the result document; if 'Whatever' or 'Auto' then it is derived from the text. [default: 'Whatever']
# --conf|--llm|--llm-conf[=Any] LLM specification. (E.g. "gpt-5.2" or "openai::gpt-4.1-mini".) [default: 'chatgpt::gpt-5.1']
# --async Whether to make the LLM calls interactively or not. [default: True]
# --progress Whether to show progress or not. [default: True]
# -o|--output=<Str> Output location; if empty or '-' then stdout is used. [default: '-']
Here is an example usage:
llm-text-summarization some-large-text.txt -o summary.md --conf=ollama::gpt-oss:20b
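The CLI presumably wraps an LLM graph that can also be driven from Raku, analogously to the code-generation example below. Here is a minimal sketch; note that 'text-summarization' is a guessed resource-graph name and the input shape is an assumption -- consult the package's resource list for the actual names:

```raku
use LLM::Functions;
use LLM::Resources;

# NOTE: 'text-summarization' is a hypothetical resource-graph name;
# verify it against the graphs shipped with LLM::Resources.
my $llm-evaluator = llm-evaluator('Ollama', model => 'gpt-oss:20b');
my $gSummary = llm-resource-graph('text-summarization',
        input => slurp('some-large-text.txt'),
        :$llm-evaluator);
```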
Code generation
use LLM::Functions;
use LLM::Resources;
my $spec = q:to/END/;
new recommender object;
load dataset @dsData;
make document term matrix;
apply LSI functions IDF, None, Cosine;
recommend by profile for passengerSex:male, and passengerClass:1st;
join across with @dsData on "id";
echo the pipeline value;
END
my $llm-evaluator = llm-evaluator('Ollama', model => 'gemma3:4b');
my $gBestCode = llm-resource-graph('code-generation-by-fallback', input => {:$spec, lang => 'Raku', :split}, :$llm-evaluator);
# LLM::Graph(size => 4, nodes => code, dsl-grammar, llm-examples, workflow-name)
$gBestCode.nodes<code><result>
# ML::SparseMatrixRecommender.new
# .use-dataset(@dsData)
# .make-term-document-matrix()
# .apply-term-weight-functions('IDF', 'None', 'Cosine')
# .recommend-by-profile({'passengerSex'=> 'male', 'passengerClass'=> '1st'})
# .join-across(@dsData, on => "id")
# .echo-value()
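Since the graph takes the target language as an input, the same spec can presumably be translated to other targets supported by DSL::Translators. A sketch, assuming 'WL' (Wolfram Language) is a supported value of `lang`:

```raku
# Re-run the same pipeline spec targeting Wolfram Language.
# (Assumes 'WL' is among the supported target languages.)
my $gWL = llm-resource-graph('code-generation-by-fallback',
        input => {:$spec, lang => 'WL', :split},
        :$llm-evaluator);
say $gWL.nodes<code><result>;
```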
References
Articles, blog posts
[AA1] Anton Antonov,
"Agentic-AI for text summarization",
(2025),
RakuForPrediction at WordPress.
(GitHub.)
[AA2] Anton Antonov,
"Day 6 – Robust code generation combining grammars and LLMs",
(2025),
Raku Advent Calendar at WordPress.
(GitHub,
Wolfram Community.)
Packages
[AAp1] Anton Antonov,
LLM::Functions, Raku package,
(2023-2026),
GitHub/antononcube.
[AAp2] Anton Antonov,
LLM::Prompts, Raku package,
(2023-2025),
GitHub/antononcube.
[AAp3] Anton Antonov,
DSL::Examples, Raku package,
(2024-2025),
GitHub/antononcube.
[AAp4] Anton Antonov,
DSL::Translators, Raku package,
(2020-2026),
GitHub/antononcube.
[AAp5] Anton Antonov,
ML::NLPTemplateEngine, Raku package,
(2023-2025),
GitHub/antononcube.
[AAp6] Anton Antonov,
WWW::OpenAI, Raku package,
(2023-2026),
GitHub/antononcube.
[AAp7] Anton Antonov,
WWW::Gemini, Raku package,
(2023-2025),
GitHub/antononcube.
[AAp8] Anton Antonov,
WWW::MistralAI, Raku package,
(2023-2024),
GitHub/antononcube.
[AAp9] Anton Antonov,
WWW::LLaMA, Raku package,
(2024-2025),
GitHub/antononcube.
[AAp10] Anton Antonov,
WWW::Ollama, Raku package,
(2026),
GitHub/antononcube.