LLM::Resources

zef:antononcube

Raku package providing subroutines and CLI scripts for specific, repeatable LLM-based workflows.

For usage examples see the blog posts listed in the References section below.
Installation

Preliminary installations

The code-generation LLM-graphs use the package "DSL::Translators", which is not in the Zef ecosystem, so it has to be installed with a dedicated script.

Here is an example of such installation:

curl -O https://raw.githubusercontent.com/antononcube/RakuForPrediction-book/refs/heads/main/scripts/raku-dsl-install.sh
source raku-dsl-install.sh

To verify a successful installation, run the following command in a terminal:

dsl-translation 'use dfTitanic; filter by sex is male; show counts'

The package installation

From Zef ecosystem:

zef install LLM::Resources

From GitHub:

zef install https://github.com/antononcube/Raku-LLM-Resources.git

Comprehensive text summarization

Here is the usage message of CLI script llm-text-summarization:

llm-text-summarization --help
# Usage:
#   llm-text-summarization <input> [--title|--with-title=<Str>] [--conf|--llm|--llm-conf[=Any]] [--async] [--progress] [-o|--output=<Str>] -- LLM-based comprehensive text summarization.
#   
#     <input>                          Text, file path, or a URL.
#     --title|--with-title=<Str>       Title of the result document; if 'Whatever' or 'Auto' then it is derived from the text. [default: 'Whatever']
#     --conf|--llm|--llm-conf[=Any]    LLM specification. (E.g. "gpt-5.2" or "openai::gpt-4.1-mini".) [default: 'chatgpt::gpt-5.1']
#     --async                          Whether to make the LLM calls interactively or not. [default: True]
#     --progress                       Whether to show progress or not. [default: True]
#     -o|--output=<Str>                Output location; if empty or '-' then stdout is used. [default: '-']

Here is an example usage:

llm-text-summarization some-large-text.txt -o summary.md --conf=ollama::gpt-oss:20b
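Per the usage message above, the input can also be a URL. Here is a hedged sketch that uses only the flags documented in the usage message; the URL and model name are illustrative:

```shell
# Summarize a web page, let the title be derived from the text
# ('Whatever' is the documented default), and print the Markdown
# summary to stdout ('-' is the default output location).
llm-text-summarization 'https://example.com/article.html' \
  --with-title=Whatever \
  --conf='openai::gpt-4.1-mini' \
  -o -
```

(Running this requires the package to be installed and access to the corresponding LLM service.)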

Code generation

use LLM::Functions;
use LLM::Resources;

my $spec = q:to/END/;
new recommender object;
load dataset @dsData;
make document term matrix;
apply LSI functions IDF, None, Cosine; 
recommend by profile for passengerSex:male, and passengerClass:1st;
join across with @dsData on "id";
echo the pipeline value;
END

my $llm-evaluator = llm-evaluator('Ollama', model => 'gemma3:4b');
my $gBestCode = llm-resource-graph('code-generation-by-fallback', input => {:$spec, lang => 'Raku', :split}, :$llm-evaluator);
# LLM::Graph(size => 4, nodes => code, dsl-grammar, llm-examples, workflow-name)
$gBestCode.nodes<code><result>
# ML::SparseMatrixRecommender.new
# .load-dataset(dsData)
# .make-term-document-matrix()
# .apply-term-weight-functions(global-weight-function => 'IDF', local-weight-function => 'None', normalizer-function => 'Cosine')
# .recommend-by-profile({'passengerSex': 'male', 'passengerClass': '1st'})
# .join-across(@dsData, on => "id")
# .echo-value()
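Besides `code`, the gist printed above lists three more nodes. Assuming they follow the same access pattern as `nodes<code><result>`, their intermediate results can be inspected analogously (a sketch; the comments describe the node names, not verified outputs):

```raku
# Inspect intermediate node results (node names taken from the LLM::Graph gist above):
$gBestCode.nodes<workflow-name><result>;   # result of the workflow-name node
$gBestCode.nodes<dsl-grammar><result>;     # result of the dsl-grammar node
$gBestCode.nodes<llm-examples><result>;    # result of the llm-examples node
```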

References

[AA1] Anton Antonov, "Agentic-AI for text summarization", (2025), RakuForPrediction at WordPress. (GitHub.)

[AA2] Anton Antonov, "Day 6 – Robust code generation combining grammars and LLMs", (2025), Raku Advent Calendar at WordPress. (GitHub, Wolfram Community.)