How to create AI bots that are resistant to attacks

Cyber-secure AI assistants

Imagine that a company's AI assistant - designed to support sales - after a brief conversation with a customer, independently grants a huge discount and "sells" the product for a fraction of its price. Sounds like fiction? Such situations have already happened. Artificial intelligence offers businesses huge potential, but it also opens a new field of risk. The security of AI assistants must therefore grow at the same pace as their capabilities.
Drawing on our experience building AI assistants, here are the assumptions and principles we follow to make our AI bots not only effective and helpful, but also safe: they work in accordance with the organization's processes and are resistant to abuse.

Properly prepared knowledge base

The foundation of any AI assistant is its knowledge base. In many companies it is scattered across various sources: documents in different formats, databases and other repositories. Their form usually makes it difficult for AI to relate content across paragraphs or to understand relationships in tabular data.

At All for One, drawing on this experience, we have built a dedicated data formatter that semi-automatically converts such documents into a form AI can understand, so that the assistant can answer questions on the company's behalf consistently and reliably.
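The core idea behind such formatting can be illustrated with a minimal sketch: normalize whitespace artifacts and group related paragraphs into self-contained chunks of bounded size, so retrieval returns coherent passages. The function below is a hypothetical illustration, not All for One's proprietary formatter.

```python
import re

def normalize_chunks(text, max_len=500):
    """Split a source document into self-contained chunks so an AI
    assistant retrieves coherent, consistent passages.
    Hypothetical sketch; real formatters also handle tables and metadata."""
    # Collapse whitespace so layout artifacts don't break sentences apart.
    paragraphs = [re.sub(r"\s+", " ", p).strip()
                  for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        # Keep related paragraphs together until the size limit is reached.
        if current and len(current) + len(p) + 1 > max_len:
            chunks.append(current)
            current = p
        else:
            current = f"{current} {p}".strip()
    if current:
        chunks.append(current)
    return chunks
```

A chunk size is chosen to balance retrieval precision against keeping related statements in one passage; the right value depends on the embedding model and documents in use.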

Secure models suitable for Enterprise

Feeding sensitive data or trade secrets into language models always involves risk: such data can be intercepted or exposed through leaks, software errors, or attacks on the model provider. That is why it is important to understand what kind of data users will be handling when interacting with AI, and to design an architecture that matches the required level of protection.
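One common safeguard in such architectures is to redact obvious sensitive identifiers before a prompt leaves the company boundary. The sketch below is a hypothetical, assumption-laden illustration of that pattern; real deployments match the redaction rules to the data classes actually present and the chosen hosting model.

```python
import re

# Hypothetical redaction rules; illustrative only, not a complete
# data-protection solution.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\d{4}){4,7}\b"),
}

def redact(prompt):
    """Replace sensitive substrings with labels before the prompt is
    sent to an external language model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

For data classes that cannot be reliably detected by patterns, the safer architectural answer is a model hosted within the required trust boundary rather than redaction alone.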

At All for One, we have access to a wide range of platforms and tools, which allows us to select customized solutions.

Modular multi-agent architecture

Our assistants are not just a combination of a language model and an interface. Each is a set of components with well-defined roles, such as:

  • LLM module for natural language understanding,
  • database module,
  • output formatting module.

With this architecture, we can easily manage and change selected components to optimize costs and achieve the right level of stability and protection.
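The benefit of such modularity can be sketched as a pipeline in which every component exposes the same interface, so any single module can be swapped without touching the rest. The names below are illustrative stand-ins, not All for One's actual components.

```python
# Hypothetical sketch: each module is a callable with the same
# interface, so the LLM, database, or formatter can be replaced
# independently to tune cost, stability, or protection.
class Pipeline:
    def __init__(self, *modules):
        self.modules = modules

    def run(self, payload):
        # Pass the payload through each module in order.
        for module in self.modules:
            payload = module(payload)
        return payload

def understand(query):          # stand-in for the LLM module
    return {"intent": "faq", "query": query}

def fetch(request):             # stand-in for the database module
    return {**request, "answer": "Our office is open 8am-4pm CET."}

def format_output(result):      # stand-in for the output formatting module
    return result["answer"]

assistant = Pipeline(understand, fetch, format_output)
```

Swapping `understand` for a cheaper model, or `fetch` for a different data source, requires changing only that one module.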

AI assistants are not just a combination of a language model and an interface. They are a set of components with well-defined roles

Kamila Malanowicz-Bulinskaya, R&D Manager, All for One Poland, Enterprise Software House

Module to eliminate unwanted context

This is one of the key elements of any of our assistants. This module:

  • blocks malicious token consumption,
  • protects against prompt injection attacks,
  • prevents conversations beyond the knowledge base's topics.

In practice, this means that the assistant answers only those questions for which it was created.
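The three checks above can be sketched as a simple input guard: reject prompts that are excessively long, that contain known injection phrases, or that fall outside the assistant's allowed topics. The phrase lists and thresholds below are illustrative assumptions; production filters typically combine such rules with a classifier model.

```python
# Hypothetical guard rules; illustrative values, not production config.
INJECTION_MARKERS = ("ignore previous instructions", "system prompt",
                     "you are now")
ALLOWED_TOPICS = ("invoice", "order", "delivery", "product")
MAX_CHARS = 2000

def accept(prompt):
    """Return True only if the prompt passes all three guard checks."""
    lowered = prompt.lower()
    if len(prompt) > MAX_CHARS:
        return False          # blocks malicious token consumption
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return False          # blocks prompt injection attempts
    if not any(topic in lowered for topic in ALLOWED_TOPICS):
        return False          # blocks off-topic conversations
    return True
```

Rejected prompts never reach the language model, which both saves tokens and keeps the assistant on-topic.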

Continuous quality control process

Once launched in production, the AI assistant is periodically and automatically tested and monitored for:

  • stability,
  • percentage of correct answers,
  • speed of operation.

This approach ensures that any anomaly is immediately flagged and forwarded to the appropriate people, who can assess its criticality and decide whether to intervene.
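Such monitoring can be sketched as comparing each periodic test run against thresholds for the three dimensions listed above. The threshold values here are illustrative assumptions, not All for One's production targets.

```python
# Hypothetical thresholds for the three monitored dimensions.
THRESHOLDS = {
    "uptime_pct": 99.0,       # stability (minimum)
    "correct_pct": 90.0,      # percentage of correct answers (minimum)
    "p95_latency_s": 3.0,     # speed of operation (maximum)
}

def anomalies(metrics):
    """Return the list of dimensions that breach their threshold,
    to be flagged for human review."""
    flags = []
    if metrics["uptime_pct"] < THRESHOLDS["uptime_pct"]:
        flags.append("stability")
    if metrics["correct_pct"] < THRESHOLDS["correct_pct"]:
        flags.append("accuracy")
    if metrics["p95_latency_s"] > THRESHOLDS["p95_latency_s"]:
        flags.append("latency")
    return flags
```

An empty result means the run passed; a non-empty list is what gets forwarded to the people responsible for the assistant.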

Individual approach and comprehensive action

ChatGPT-style tools may seem like ready-made products that can simply be deployed in a company and made available to employees or customers. In reality, only by understanding the risks, opportunities and specifics of the organization does it become clear how many additional components and safeguards need to be included.

Protecting the company from prompt injection attacks and users from hallucinations, while providing the best possible user experience, requires a comprehensive approach – the kind we apply in our projects.
