LLM::Functions
In brief
This Raku package provides functions and function objects to access, interact with, and utilize
Large Language Models (LLMs), like
OpenAI, [OAI1], and
PaLM, [ZG1].
For more details on how the concrete LLMs are accessed, see the packages
"WWW::OpenAI", [AAp2], and
"WWW::PaLM", [AAp3].
The LLM functions built by this package can have evaluators that use "sub-parsers" -- see
"Text::SubParsers", [AAp4].
The primary motivation to have handy, configurable functions for utilizing LLMs
came from my work on the packages
"ML::FindTextualAnswer", [AAp6], and
"ML::NLPTemplateEngine", [AAp7].
A very similar system of functionalities has been developed by Wolfram Research, Inc.;
see the paclet
"LLMFunctions", [WRIp1].
For well-curated and instructive examples of LLM prompts see the
Wolfram Prompt Repository.
The article
"Generating documents via templates and LLMs", [AA1],
shows an alternative way of streamlining LLM usage. (Via Markdown, Org-mode, or Pod6 templates.)
Installation
Package installations from both sources use the zef installer
(which should be bundled with the "standard" Rakudo installation file).
To install the package from the Zef ecosystem use the shell command:
zef install LLM::Functions
To install the package from the GitHub repository use the shell command:
zef install https://github.com/antononcube/Raku-LLM-Functions.git
Design
"Out of the box"
"LLM::Functions" uses
"WWW::OpenAI", [AAp2], and
"WWW::PaLM", [AAp3].
Other LLM access packages can be utilized via appropriate LLM configurations.
Configurations:
- Are instances of the class
LLM::Functions::Configuration
- Are used by instances of the class
LLM::Functions::Evaluator
- Can be converted to Hash objects (i.e. have a .Hash method)
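Here is a minimal sketch of working with configuration objects; constructing the evaluator directly via LLM::Functions::Evaluator.new(conf => ...) is an assumption based on the class and attribute names above:

use LLM::Functions;

# Make a configuration object (an LLM::Functions::Configuration instance):
my $conf = llm-configuration('PaLM');

# Configurations can be converted to Hash objects:
say $conf.Hash<name model>;

# Assumption: an evaluator object can be made directly from a configuration.
my $evlr = LLM::Functions::Evaluator.new(conf => $conf);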
New LLM functions are constructed with the function llm-function.
The function llm-function:
- Has the option "llm-evaluator" that takes evaluators, configurations, or string shorthands as values
- Returns anonymous functions (that access LLMs via evaluators/configurations.)
- Gives result functions that can be applied to different types of arguments, depending on the prompt (the first argument)
- Can take a (sub-)parser argument for post-processing of LLM results
- Takes as a first argument a prompt that can be a:
- String
- Function with positional arguments
- Function with named arguments
Here is a sequence diagram that follows the steps of a typical creation procedure of
LLM configuration and evaluator objects, and the corresponding LLM function that utilizes them:
sequenceDiagram
participant User
participant llmfunc as llm-function
participant llmconf as llm-configuration
participant LLMConf as LLM configuration
participant LLMEval as LLM evaluator
participant AnonFunc as Anonymous function
User ->> llmfunc: ・prompt<br>・conf spec
llmfunc ->> llmconf: conf spec
llmconf ->> LLMConf: conf spec
LLMConf ->> LLMEval: wrap with
LLMEval ->> llmfunc: evaluator object
llmfunc ->> AnonFunc: create with:<br>・evaluator object<br>・prompt
AnonFunc ->> llmfunc: handle
llmfunc ->> User: handle
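In code, the steps of this diagram correspond to a call like the following minimal sketch, in which the configuration spec is produced by llm-configuration (a string shorthand such as 'PaLM' would also do):

# conf spec -> LLM configuration -> LLM evaluator -> anonymous function
my &f = llm-function('Translate to French:', llm-evaluator => llm-configuration('PaLM'));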
Here is a sequence diagram for making an LLM configuration with a global (engineered) prompt,
and using that configuration to generate a chat message response:
sequenceDiagram
participant WWWOpenAI as WWW::OpenAI
participant User
participant llmfunc as llm-function
participant llmconf as llm-configuration
participant LLMConf as LLM configuration
participant LLMChatEval as LLM chat evaluator
participant AnonFunc as Anonymous function
User ->> llmconf: engineered prompt
llmconf ->> User: configuration object
User ->> llmfunc: ・prompt<br>・configuration object
llmfunc ->> LLMChatEval: configuration object
LLMChatEval ->> llmfunc: evaluator object
llmfunc ->> AnonFunc: create with:<br>・evaluator object<br>・prompt
AnonFunc ->> llmfunc: handle
llmfunc ->> User: handle
User ->> AnonFunc: invoke with<br>message argument
AnonFunc ->> WWWOpenAI: ・engineered prompt<br>・message
WWWOpenAI ->> User: LLM response
Configurations
OpenAI-based
Here is the default, OpenAI-based configuration:
use LLM::Functions;
.raku.say for llm-configuration('OpenAI').Hash;
# :stop-tokens($[])
# :examples($[])
# :tool-response-insertion-function(WhateverCode)
# :prompt-delimiter(" ")
# :model("text-davinci-003")
# :total-probability-cutoff(0.03)
# :tool-prompt("")
# :function(proto sub OpenAITextCompletion ($prompt is copy, :$model is copy = Whatever, :$suffix is copy = Whatever, :$max-tokens is copy = Whatever, :$temperature is copy = Whatever, Numeric :$top-p = 1, Int :$n where { ... } = 1, Bool :$stream = Bool::False, Bool :$echo = Bool::False, :$stop = Whatever, Numeric :$presence-penalty = 0, Numeric :$frequency-penalty = 0, :$best-of is copy = Whatever, :$auth-key is copy = Whatever, Int :$timeout where { ... } = 10, :$format is copy = Whatever, Str :$method = "tiny") {*})
# :tool-request-parser(WhateverCode)
# :argument-renames(${:api-key("auth-key"), :stop-tokens("stop")})
# :max-tokens(300)
# :api-user-id("user:357821092670")
# :prompts($[])
# :evaluator(Whatever)
# :format("values")
# :tools($[])
# :api-key(Whatever)
# :module("WWW::OpenAI")
# :name("openai")
# :temperature(0.8)
Here is the ChatGPT-based configuration:
.say for llm-configuration('ChatGPT').Hash;
# prompt-delimiter =>
# format => values
# examples => []
# tool-prompt =>
# function => &OpenAIChatCompletion
# total-probability-cutoff => 0.03
# module => WWW::OpenAI
# api-key => (Whatever)
# stop-tokens => []
# max-tokens => 300
# tool-request-parser => (WhateverCode)
# api-user-id => user:958546605799
# argument-renames => {api-key => auth-key, stop-tokens => stop}
# temperature => 0.8
# model => gpt-3.5-turbo
# tool-response-insertion-function => (WhateverCode)
# prompts => []
# name => chatgpt
# evaluator => (my \LLM::Functions::EvaluatorChat_2514878444792 = LLM::Functions::EvaluatorChat.new(context => "", examples => Whatever, user-role => "user", assitant-role => "assistant", system-role => "system", conf => LLM::Functions::Configuration.new(name => "chatgpt", api-key => Whatever, api-user-id => "user:958546605799", module => "WWW::OpenAI", model => "gpt-3.5-turbo", function => proto sub OpenAIChatCompletion ($prompt is copy, :$type is copy = Whatever, :$role is copy = Whatever, :$model is copy = Whatever, :$temperature is copy = Whatever, :$max-tokens is copy = Whatever, Numeric :$top-p = 1, Int :$n where { ... } = 1, Bool :$stream = Bool::False, :$stop = Whatever, Numeric :$presence-penalty = 0, Numeric :$frequency-penalty = 0, :$auth-key is copy = Whatever, Int :$timeout where { ... } = 10, :$format is copy = Whatever, Str :$method = "tiny") {*}, temperature => 0.8, total-probability-cutoff => 0.03, max-tokens => 300, format => "values", prompts => [], prompt-delimiter => " ", examples => [], stop-tokens => [], tools => [], tool-prompt => "", tool-request-parser => WhateverCode, tool-response-insertion-function => WhateverCode, argument-renames => {:api-key("auth-key"), :stop-tokens("stop")}, evaluator => LLM::Functions::EvaluatorChat_2514878444792), formatron => "Str"))
# tools => []
Remark: llm-configuration(Whatever) is equivalent to llm-configuration('OpenAI').
Remark: Both the "OpenAI" and "ChatGPT" configurations use functions of the package "WWW::OpenAI", [AAp2].
The "OpenAI" configuration is for text-completions;
the "ChatGPT" configuration is for chat-completions.
PaLM-based
Here is the default PaLM configuration:
.say for llm-configuration('PaLM').Hash;
# api-key => (Whatever)
# model => text-bison-001
# tool-response-insertion-function => (WhateverCode)
# format => values
# stop-tokens => []
# module => WWW::PaLM
# tool-prompt =>
# evaluator => (Whatever)
# function => &PaLMGenerateText
# tool-request-parser => (WhateverCode)
# name => palm
# tools => []
# total-probability-cutoff => 0
# api-user-id => user:235041118372
# temperature => 0.4
# max-tokens => 300
# prompt-delimiter =>
# prompts => []
# examples => []
# argument-renames => {api-key => auth-key, max-tokens => max-output-tokens, stop-tokens => stop-sequences}
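Here is a hedged sketch of overriding the defaults shown above, assuming llm-configuration accepts named-argument overrides; note that argument-renames maps the configuration fields to the parameter names of the underlying WWW::PaLM function:

# Assumption: named arguments override the configuration defaults.
my $conf = llm-configuration('PaLM', temperature => 0.1, max-tokens => 600);
say $conf.Hash<temperature max-tokens>;
# Per argument-renames, max-tokens is passed to &PaLMGenerateText as max-output-tokens.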
Basic usage of LLM functions
Textual prompts
Here we make an LLM function with a simple (short, textual) prompt:
my &func = llm-function('Show a recipe for:');
# -> $text, *%args { #`(Block|2515004761376) ... }
Here we evaluate over a message:
say &func('greek salad');
# Greek Salad
#
# Ingredients:
#
# - 2 large tomatoes, diced
#
# - 1 cucumber, diced
#
# - 1/2 red onion, diced
#
# - 1/3 cup kalamata olives, pitted and halved
#
# - 1/2 cup crumbled feta cheese
#
# - 1/4 cup extra virgin olive oil
#
# - 2 tablespoons lemon juice
#
# - 1 tablespoon oregano
#
# - salt and pepper to taste
#
# Instructions:
#
# 1. In a large bowl, combine the tomatoes, cucumber, onion, and olives.
#
# 2. In a small bowl, whisk together the olive oil, lemon juice, oregano, salt, and pepper.
#
# 3. Pour the dressing over the vegetables and toss to combine.
#
# 4. Sprinkle the feta cheese over the top.
#
# 5. Serve immediately, or refrigerate for up to two days. Enjoy!
Positional arguments
Here we make an LLM function with a function-prompt and a numeric interpreter of the result:
my &func2 = llm-function(
{"How many $^a can fit inside one $^b?"},
form => Numeric,
llm-evaluator => 'palm');
# -> **@args, *%args { #`(Block|2515008657592) ... }
Here we apply the function:
my $res2 = &func2("tennis balls", "toyota corolla 2010");
# 110
Here we check whether we got a number:
$res2 ~~ Numeric
# False
Named arguments
Here the first argument is a template with two named arguments:
my &func3 = llm-function(-> :$dish, :$cuisine {"Give a recipe for $dish in the $cuisine cuisine."}, llm-evaluator => 'palm');
# -> **@args, *%args { #`(Block|2515008662848) ... }
Here is an invocation:
&func3(dish => 'salad', cuisine => 'Russian', max-tokens => 300);
# **Russian Salad (Olivier Salad)**
#
# Ingredients:
#
# * 2 pounds (900g) potatoes, peeled and cubed
# * 1 pound (450g) carrots, peeled and cubed
# * 1 pound (450g) celery root, peeled and cubed
# * 1 pound (450g) green beans, trimmed and blanched
# * 1 pound (450g) red onion, diced
# * 1 pound (450g) ham, diced
# * 1 pound (450g) hard-boiled eggs, peeled and diced
# * 1 cup (240ml) mayonnaise
# * 1/2 cup (120ml) sour cream
# * 1/4 cup (60ml) finely chopped fresh dill
# * Salt and pepper to taste
#
# Instructions:
#
# 1. In a large bowl, combine the potatoes, carrots, celery root, green beans, red onion, ham, and eggs.
# 2. In a small bowl, whisk together the mayonnaise, sour cream, dill, salt, and pepper.
# 3. Pour the dressing over the salad and toss to coat.
# 4. Serve immediately or chill for later.
#
# **Tips:**
#
# * To make the dressing ahead of time, whisk together the mayonnaise, sour cream, dill, salt, and pepper in a small
LLM example functions
The function llm-example-function can be given a training set of examples in order
to generate results according to the "laws" implied by that training set.
Here an LLM is asked to produce a generalization:
llm-example-function([ 'finger' => 'hand', 'hand' => 'arm' ])('foot')
# leg
Here an array of training pairs is used:
'Oppenheimer' ==> (["Einstein" => "14 March 1879", "Pauli" => "April 25, 1900"] ==> llm-example-function)()
# April 22, 1904
Here we define an LLM function for translating WL associations into Python dictionaries:
my &fea = llm-example-function( '<| A->3, 4->K1 |>' => '{ A:3, 4:K1 }');
&fea('<| 23->3, G->33, T -> R5|>');
# { 23:3, G:33, T:R5 }
The function llm-example-function takes as a first argument:
- A single Pair object of two scalars
- A single Pair object of two Positional objects with the same length
- A Hash
- A Positional object of pairs
Remark: The function llm-example-function is implemented with llm-function and a suitable prompt.
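For illustration, here is a rough sketch (not the package's actual implementation) of how such a function can be expressed with llm-function and a constructed prompt:

my @examples = 'finger' => 'hand', 'hand' => 'arm';
my &fe = llm-function({
    'Given the input-output pairs: '
    ~ @examples.map({ .key ~ ' -> ' ~ .value }).join('; ')
    ~ ", what is the output for the input: $^a ?"
});
say &fe('foot');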
Here is an example of using hints:
my &fec = llm-example-function(
["crocodile" => "grasshopper", "fox" => "cardinal"],
hint => 'animal colors');
say &fec('raccoon');
# skunk
Using chat-global prompts
The configuration objects can be given prompts that influence the LLM responses
"globally" throughout the whole chat. (See the second sequence diagram above.)
For detailed examples, see the documents in the package repository.
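Here is a minimal sketch, assuming the chat-global prompt is supplied through the prompts field seen in the configuration dumps above:

# Assumption: prompts => [...] supplies the chat-global (engineered) prompt.
my $conf = llm-configuration('ChatGPT',
        prompts => ['You are a gem expert and you give concise answers.']);
my &fgem = llm-function('', llm-evaluator => $conf);
&fgem('What is the most transparent gem?');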
Chat objects
Here we create a chat object that uses OpenAI's ChatGPT:
my $prompt = 'You are a gem expert and you give concise answers.';
my $chat = llm-chat(chat-id => 'gem-expert-talk', conf => 'ChatGPT', :$prompt);
# LLM::Functions::Chat(chat-id = gem-expert-talk, llm-evaluator.conf.name = chatgpt, messages.elems = 0)
$chat.eval('What is the most transparent gem?');
# The most transparent gem is typically considered to be diamond.
$chat.eval('Ok. What are the second and third most transparent gems?');
# The second most transparent gem is usually considered to be sapphire, and the third most transparent gem is generally considered to be spinel.
Here are the prompt(s) and all messages of the chat object:
$chat.say
# Chat: gem-expert-talk
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# Prompts: You are a gem expert and you give concise answers.
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => What is the most transparent gem?
# timestamp => 2023-08-14T16:36:49.221171-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => The most transparent gem is typically considered to be diamond.
# timestamp => 2023-08-14T16:36:50.528752-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => user
# content => Ok. What are the second and third most transparent gems?
# timestamp => 2023-08-14T16:36:50.548841-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role => assistant
# content => The second most transparent gem is usually considered to be sapphire, and the third most transparent gem is generally considered to be spinel.
# timestamp => 2023-08-14T16:36:51.844316-04:00
Potential problems
With PaLM, certain wrong configurations produce the error:
error => {code => 400, message => Messages must alternate between authors., status => INVALID_ARGUMENT}
TODO
- TODO Resources
- TODO Gather prompts
- TODO Process prompts into a suitable database
- TODO Implementation
- TODO Processing an array of prompts as a first argument
- TODO Prompt class
- For retrieval and management of prompts.
- Prompts can be either plain strings or templates / functions.
- Each prompt has associated metadata:
- Type: persona, function, modifier
- Tool/parser
- Keywords
- Contributor?
- Topics: "Advisor bot", "AI Guidance", "For Fun", ...
- Most likely, there would be a separate package "LLM::Prompts".
- And/or "LLM::Prompts::Repository".
- MAYBE Random selection of LLM-evaluator
- Currently, the LLM-evaluator of the LLM-functions and LLM-chats is static, assigned at creation.
- This is easily implemented at "top-level."
- DONE Chat class / object
- DONE LLM example function
- DONE First version with the signatures:
- @pairs
- @input => @output
- Hint option
- DONE Verify works with OpenAI
- DONE Verify works with PaLM
- DONE Interpreter argument for llm-function
- See the formatron attribute of LLM::Functions::Evaluator.
- DONE Adding form option to chat objects' evaluators
- TODO CLI
- TODO Based on Chat objects
- TODO Storage and retrieval of chats
- TODO Has as parameters all attributes of the LLM-configuration objects.
- TODO Documentation
- TODO Detailed parameters description
- TODO Configuration
- TODO Evaluator
- TODO Chat
- DONE Using engineered prompts
- DONE Expand tests in documentation examples
- DONE Conversion of test-file tests into Gherkin specs
- TODO Using retrieved prompts
- TODO Longer conversations / chats
- TODO Number game programming
- TODO Man vs Machine
- TODO Machine vs Machine
References
Articles
[AA1] Anton Antonov,
"Generating documents via templates and LLMs",
(2023),
RakuForPrediction at WordPress.
[ZG1] Zoubin Ghahramani,
"Introducing PaLM 2",
(2023),
Google Official Blog on AI.
Repositories, sites
[OAI1] OpenAI Platform, OpenAI platform.
[WRIr1] Wolfram Research, Inc.
Wolfram Prompt Repository.
Packages, paclets
[AAp1] Anton Antonov,
LLM::Functions Raku package,
(2023),
GitHub/antononcube.
[AAp2] Anton Antonov,
WWW::OpenAI Raku package,
(2023),
GitHub/antononcube.
[AAp3] Anton Antonov,
WWW::PaLM Raku package,
(2023),
GitHub/antononcube.
[AAp4] Anton Antonov,
Text::SubParsers Raku package,
(2023),
GitHub/antononcube.
[AAp5] Anton Antonov,
Text::CodeProcessing Raku package,
(2021),
GitHub/antononcube.
[AAp6] Anton Antonov,
ML::FindTextualAnswer Raku package,
(2023),
GitHub/antononcube.
[AAp7] Anton Antonov,
ML::NLPTemplateEngine Raku package,
(2023),
GitHub/antononcube.
[WRIp1] Wolfram Research, Inc.
LLMFunctions paclet,
(2023),
Wolfram Language Paclet Repository.