SecurityGateway for Email Servers v11.0

AI Classification Models

Use this page to manage the list of AI models that you will use to analyze your selected messages. You can add, edit, or remove models, and view their configuration details such as name, endpoint URL, AI profile, timeout, and API key. You can use cloud-based AI models, your own locally hosted models, or both.

Cloud-Based AI Models

Here are some things to consider when choosing a cloud-based AI model.

Cost and Testing Considerations

Cost is a valid concern when using cloud-based AI models, so you should test various models to see which works best for you. You may find the cost of using the models recommended below to be quite manageable, especially when AI Rules are configured to restrict which messages are sent for classification. For example, you could choose to use AI Classification only for messages addressed to certain users, such as those who you think may be most vulnerable to phishing attempts.

OpenAI API — OpenAI may offer a certain amount of free tokens for new accounts. For more extensive testing or if you agree to their data usage policies (which might involve allowing them to use your data for training purposes), you might find options for reduced-cost or free testing. Always review the current terms of service for any AI provider.

Google Cloud AI — With a free Google Cloud account, Google often provides a limited number of free requests as part of their free tier, which can be useful for initial testing and familiarization.

Recommended Cloud-Based AI Models

For the best balance of capability and cost-effectiveness, consider the following models:

1. OpenAI:

gpt-4.1-mini — Often a good balance of performance and cost for general tasks.

gpt-4.1 — A more powerful option, potentially better for complex analysis, but may incur higher costs.

2. Google Gemini:

gemini-2.0-flash — Designed for speed and efficiency.

Local AI Models

Local AI models that support the OpenAI API are also an option if sending data to a third party is not feasible. Some platforms that facilitate this include:

LocalAI

Ollama (OpenAI Compatibility)

With local models, performance will be a concern, especially if you don't have a dedicated GPU or AI accelerator hardware. Local models are also typically less capable than the leading cloud models at nuanced tasks like accurately detecting if a message is phishing or purely commercial. However, they can be very effective for other use cases, such as tagging messages based on specific content patterns you define.
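Because these local platforms expose an OpenAI-compatible API, a classification request to them looks the same as one sent to OpenAI itself. The sketch below is illustrative only and is not SecurityGateway's internal implementation; the endpoint URL (Ollama's default), model name, and classification prompt are all assumptions for the example:

```python
import json
import urllib.request

# Assumed defaults for this sketch: Ollama's OpenAI-compatible endpoint
# and an example local model name.
ENDPOINT_URL = "http://localhost:11434/v1"
MODEL_NAME = "llama3.2"

def build_classification_request(message_text: str) -> dict:
    """Build an OpenAI-style chat completion body asking the model to label a message."""
    return {
        "model": MODEL_NAME,
        "temperature": 1.0,  # the documented default temperature
        "messages": [
            {"role": "system",
             "content": "Classify the following email as PHISHING, COMMERCIAL, or OTHER. "
                        "Reply with the single label only."},
            {"role": "user", "content": message_text},
        ],
    }

def classify(message_text: str) -> str:
    """Send the request and return the model's label (requires a running local server)."""
    req = urllib.request.Request(
        ENDPOINT_URL + "/chat/completions",
        data=json.dumps(build_classification_request(message_text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()
```

A tagging task like the one in the system prompt above is the kind of well-defined, pattern-based use case where a local model can perform well.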

Adding or Editing an AI Model

To add a new AI model to the Models list, click New on the Models page toolbar. To edit an existing model's entry, select the entry and click Edit.

Properties

Profile:

Select a built-in AI profile, or choose Custom to enter your own settings when using a different AI model.

Display Name:

Enter a name for this AI model's entry; this name is just for your reference. For the predefined profiles, a generic name is added for you, which can be changed if you prefer.

Endpoint URL:

This is the specific web address where the AI model's API can be accessed to send requests and receive responses. In most cases this URL will be automatically added for you when you select a Profile above.

API Key:

Use this option to provide any necessary API key obtained from your chosen AI service provider.

Model Name:

Enter a model name or click Fetch List to produce a drop-down list of available models.

Fetch List

Click this button to produce a drop-down list of available AI models. Note: You must first enter an Endpoint URL and API Key to use the Fetch List option.
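Under the OpenAI-compatible API convention, a service advertises its models at the endpoint's /models path, with the API key sent as a bearer token, which is why both fields are required before fetching. The sketch below shows what such a fetch typically involves; it is an assumption about the convention, not SecurityGateway's own code, and the endpoint and key shown are placeholders:

```python
import json
import urllib.request

def build_models_request(endpoint_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the OpenAI-style model listing path."""
    return urllib.request.Request(
        endpoint_url.rstrip("/") + "/models",
        headers={"Authorization": "Bearer " + api_key},
    )

def fetch_model_names(endpoint_url: str, api_key: str) -> list:
    """Return the model IDs advertised by the service (requires network access)."""
    req = build_models_request(endpoint_url, api_key)
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]
```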

Timeout (seconds):

This is the number of seconds the server will wait for an AI response before giving up.

Default Temperature:

AI Classification's default temperature is set to 1.0. The temperature parameter controls the randomness of the AI responses, on a scale of 0 to 2. A temperature of 0.1, for example, would be very narrow and deterministic, which could cause incorrect classifications due to its rigidity. Conversely, a setting of 2.0 would be highly random and could produce strange, unrelated, or even nonsensical results. In most cases you should leave this set to the default value of 1.0. If you are regularly experiencing incorrect classifications, you should first try making adjustments to your Prompts rather than the temperature setting.
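The temperature value is passed along in each request body. As a purely hypothetical illustration of the documented 0 to 2 scale (this helper is made up for the example, not part of the product):

```python
def validated_temperature(value, default: float = 1.0) -> float:
    """Validate a temperature on the OpenAI-style 0-2 scale,
    falling back to the default when the value is missing or out of range."""
    if not isinstance(value, (int, float)) or not 0.0 <= value <= 2.0:
        return default
    return float(value)
```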

Allow Invalid SSL Certificates

When using a Local or Custom Profile, check this box if you wish to allow invalid SSL certificates (for example, a self-signed certificate on a locally hosted server).

Additional HTTP Request Headers:

If you need to include any additional HTTP request headers, enter those here.
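Any headers entered here are sent alongside the standard ones on each request. As a minimal sketch of that merge (the extra header name below is a made-up example, not one the product requires):

```python
def merge_headers(extra_headers: dict) -> dict:
    """Combine the standard request headers with any user-supplied extras;
    an extra header with the same name overrides the standard one."""
    headers = {"Content-Type": "application/json"}
    headers.update(extra_headers)
    return headers
```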

Test Connection

Once your AI model is configured, click Test Connection at the top of the dialog to make sure your server is connecting to and getting a response from the configured AI model.