Use this page to create and manage the AI Classification prompts that will be sent to your selected AI model when a message matches one of your AI Classification Rules. You can add, edit, and delete prompts, and view each prompt's configuration details. The Prompt List displays an Enabled column for enabling or disabling each prompt, a Prompt Name column, and a column showing which AI Model the prompt uses.
To add a new AI prompt to the Prompts list, click New on the Prompts page toolbar. To edit an existing prompt, select the prompt's entry and click Edit.
Properties
Enabled
Check this box to enable the selected prompt for use.
Name:
Assign a unique name to your prompt.
Description:
Use this box for a description of your prompt. This is optional and only for your reference.
AI Model:
Use this drop-down list to choose which preconfigured AI Model will receive this prompt.
Prompt Text:
This is the text that will be sent to the selected AI Model when a message matches one of the AI Classification Rules associated with this prompt. The prompt must ask the model to return, for each message analyzed, exactly one of the Allowed Classification Labels specified below, optionally followed by additional text after a comma. Any labels mentioned in the prompt must exactly match the labels specified in the Allowed Classification Labels option; otherwise, messages may not be classified properly. A number of variables are available for controlling exactly which message data is included in the prompt. Click View all available variables for a list of those variables. See Sample Prompts below for two examples of acceptable prompts, with an explanation of each.
View all available variables
Click this button to display a list of all variables that are allowed in your prompts. Variables must be enclosed in curly braces, for example: {variable_name}. To set a maximum number of characters for a variable, use: {variable_name,max_chars}. For example, {body.text,50000} would limit the text that replaces the variable to 50,000 characters; any text beyond that limit is truncated. A conceptual sketch of how this substitution and truncation behaves follows the variable list below.
Variable | Description
{classification_labels} | The allowed classification labels
{remote_ip} | Remote client IP address
{remote_ip.ptr} | PTR (reverse DNS) record
{ehlo_domain} | EHLO/HELO domain
{ehlo_domain.ptr} | EHLO/HELO domain (reverse DNS)
{env_from} | Envelope sender address
{subject} | Email subject
{body.text} | Plain text message body
{body.html} | HTML message body
{attachments} | List of attachments
{headers} | Message headers (decoded)
{headers.raw} | Message headers (raw)
{message.raw} | Full raw message data (RFC 5322)
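As a conceptual illustration of the variable syntax described above, the following sketch (written in Python purely for illustration; it is not part of the product, and the actual substitution is performed by the server) shows how {variable} and {variable,max_chars} placeholders could be expanded and truncated before the prompt is sent to the AI Model:

import re

def expand_prompt(template: str, values: dict) -> str:
    # Replace {variable} and {variable,max_chars} placeholders with message
    # data, truncating each value to the optional character limit.
    pattern = re.compile(r"\{([a-z_.]+)(?:,(\d+))?\}")

    def substitute(match):
        name, max_chars = match.group(1), match.group(2)
        value = str(values.get(name, ""))
        if max_chars is not None:
            value = value[:int(max_chars)]  # anything beyond the limit is truncated
        return value

    return pattern.sub(substitute, template)

# Example: {body.text,50000} is replaced by at most 50,000 characters of body text.
prompt = expand_prompt(
    "Subject: {subject}\nBody: {body.text,50000}",
    {"subject": "Quarterly invoice", "body.text": "Please find attached..."},
)

The point of the sketch is simply that values longer than the specified limit are cut off rather than causing an error; the server handles this expansion for you automatically.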
View example prompt
Click this button to see an example prompt, which can be used as a template or starting point for creating an AI Classification prompt. On that page, click the Use This Template button to copy the example prompt into the Prompt Text box below. See Sample Prompts below for more information about how to construct a prompt.
Allowed Classification Labels:
These are the classification labels that you want the AI model to assign to the messages it analyzes. If you define the classification labels manually in the prompt itself, rather than using the {classification_labels} variable, they must match these labels exactly; otherwise, messages will not be classified properly.
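To illustrate the expected response format, here is a hypothetical sketch (again in Python, purely for illustration; the server performs this step itself) of splitting a model's reply into a label and an optional explanation and checking the label against the allowed list. The label set used here is only an example:

# Hypothetical example labels; use the labels you configured on this page.
ALLOWED_LABELS = {"LEGITIMATE", "SPAM", "SUSPICIOUS"}

def parse_classification(response: str):
    # Split the reply at the first comma: the label comes first, the
    # optional explanation follows.
    label, _, explanation = response.partition(",")
    label = label.strip()
    if label not in ALLOWED_LABELS:
        raise ValueError(f"Label {label!r} does not match an allowed label")
    return label, explanation.strip()

# "SPAM, unsolicited bulk advertisement from an unknown sender"
# -> ("SPAM", "unsolicited bulk advertisement from an unknown sender")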
Sample Prompts
Below are two sample AI Classification prompts. The first is simple and the second is more complex. Tip: AI tools like ChatGPT can be helpful in suggesting and refining prompts for various classification tasks.
Prompt Sample #1:
With this prompt, the AI model determines, entirely on its own, the definitions of the listed classifications. If, for example, your classifications are LEGITIMATE, SPAM, and SUSPICIOUS, then it is left to the model to determine what constitutes "spam," what makes a message "legitimate," and which details it should consider "suspicious." This sample also uses the {classification_labels} variable to include your Allowed Classification Labels automatically. Note that it also instructs the model to respond with "exactly one" of the labels, "followed by a comma and the explanation." For AI Classification, the model must return only ONE of the classification labels, optionally followed by additional text after a comma, such as an explanation of why it assigned that classification to the message. Finally, variables are used to include the data to analyze: the message's headers, up to 50,000 characters of the message's HTML body, and a list of any included attachments.
--
Analyze this email and classify it based on content. Respond with exactly one of: {classification_labels}, followed by a comma and the explanation.
Headers: {headers}
Body: {body.html,50000}
Attachments: {attachments}
Prompt Sample #2:
In this prompt, each classification is explicitly defined; it isn't left to the model to determine what each classification means. It also tells the model to respond with only one classification, or category, followed by a comma and then the explanation. Asking the model to explain why it chose a classification can sometimes yield better results, reducing the possibility of hallucinations or illogical answers. Further, that extra information is logged, which could help you troubleshoot a problem or refine the prompt. Finally, variables are used to include the message's headers and up to 50,000 characters of the message's plain text body.
--
You are an email classification assistant. Your task is to read an email and classify it into one of the following categories:
LEGITIMATE: Personal, work-related, transactional, or expected emails from trusted sources (e.g., receipts, order confirmations, service updates).
UNSOLICITED: Unrequested emails that are not clearly harmful or selling something, such as newsletters, surveys, or random contact attempts.
COMMERCIAL: Emails promoting or selling a product or service, including advertisements, offers, and marketing campaigns.
HARMFUL: Emails that appear to be phishing attempts, contain malware, impersonate known brands or services to steal information, involve scams, or have malicious intent. Use this label only if there is clear evidence of deception, fraud, or threat. Emails from well-known companies that are DMARC aligned should not be marked HARMFUL unless they show signs of impersonation or malicious content (e.g., mismatched links, suspicious attachments, urging immediate login via unknown URLs).
Respond with only the one category followed by a comma and the explanation.
Headers: {headers}
Body: {body.text,50000}