
Understanding Prompts

The Principles of OOC Service

Tokens and Prompts

1. Tokens
A Token is one of the core concepts of AI language models and is the most basic unit by which generative AI understands and creates text.
For example, a sentence like <OOC is good!> is split into a total of 4 tokens. How is it divided? It is separated into four parts: OOC, is, good, and !.
As you can see, tokens are easy to understand if you think of them as roughly the AI's equivalent of words. Special characters also count as tokens.
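The word-and-punctuation split above can be sketched in a few lines. This is a simplified illustration only: production models use model-specific subword tokenizers (such as BPE), so real token counts will differ from this rough word split.

```python
import re

def rough_tokenize(text: str) -> list[str]:
    # Split into word runs and individual punctuation marks.
    # NOTE: a rough illustration, not OOC's actual tokenizer --
    # real models use subword tokenization (e.g. BPE).
    return re.findall(r"\w+|[^\w\s]", text)

print(rough_tokenize("OOC is good!"))  # → ['OOC', 'is', 'good', '!']
```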
2. Prompts
You can think of a Prompt as an 'instruction or command that directs the AI to generate a response.'
OOC's prompts consist of three main types:
System Prompt
Depending on the selected prompt template, this includes prompts that guide the character's rules or response style so that it can generate answers for a specific purpose.
Character Prompt
This refers to the text the character creator enters in the character settings and information. It supports up to 3,000 characters of input.
Please remember that the "One-line Introduction" entered during the profile stage is text that is NOT delivered to the AI.
Feature Prompt
This includes prompts entered in Example Dialogues, Media (Situation Images), Intro Settings (Starting Situation), and Keyword Book (Keywords and Information).
In the case of the Keyword Book, prompts are activated conditionally; all other feature prompts are always included in the prompt.
We are continuously optimizing the prompts for each feature internally, and newly released features will be provided with their own feature prompts as well.

How Character Chat Works

OOC generates character responses by delivering the "Prompt Trio" (System, Character, and Feature Prompts) along with the conversation history between the user and the character to the AI model.
In other words, when a user's utterance is input to the AI, it is not just that single utterance being sent; rather, the Prompt Trio and all previous conversation history are bundled together.
The System Prompt maintains the overall structure of the prompt, and the Character and Feature Prompts are injected at appropriate positions within the System Prompt to manage the generation of natural character responses.
Additionally, to help the AI remember the conversation for as long as possible, if a summary of the previous conversation exists, that summary is also passed to the AI model to generate the response. (Details regarding the summary will be described later)
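The assembly described above can be sketched as follows. This is a hypothetical illustration, not OOC's actual implementation: the function name, the placeholder names `character`/`features`, and the message format are all assumptions made for the example.

```python
def build_prompt(system_prompt, character_prompt, feature_prompts,
                 history, summary=None):
    # Hypothetical sketch: the Character and Feature Prompts are injected
    # at designated positions inside the System Prompt template.
    system = system_prompt.format(
        character=character_prompt,
        features="\n".join(feature_prompts),
    )
    messages = [{"role": "system", "content": system}]
    # If a long-term memory summary exists, it is passed along as well.
    if summary:
        messages.append(
            {"role": "system", "content": f"Summary of earlier chat: {summary}"}
        )
    # The full remaining conversation history is appended after the prompts.
    messages.extend(history)
    return messages

msgs = build_prompt(
    "Follow these rules.\nCharacter:\n{character}\nFeatures:\n{features}",
    "Mina is a cheerful barista.",
    ["Example dialogue: ..."],
    [{"role": "user", "content": "Hi!"}],
    summary="They met yesterday.",
)
```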

Limitations and Overcoming Them

Overcoming the "Token Input Limit"

As explained above, as the conversation grows longer, the accumulated history increases the number of input tokens. However, since current AI models have a maximum input token limit they can support, it is impossible to deliver every single word of the conversation history to the AI model.
To solve this, when the conversation becomes too long, we have built a "Long-term Memory System" that summarizes previous conversations and includes that summary when generating a response. This system will be continuously improved considering various characters and situations. We plan to advance the system so it can summarize according to character traits (e.g., prioritizing the relationship in Roleplaying, or the user's status and settings in Simulations).
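The idea of summarizing older turns when the history exceeds the token budget can be sketched like this. This is a hypothetical simplification of such a Long-term Memory System: the function names, the half-budget reservation for recent turns, and the pluggable `count_tokens`/`summarize` callbacks are all assumptions for the example.

```python
def apply_memory_budget(history, max_tokens, count_tokens, summarize):
    # Hypothetical sketch: if the history fits the budget, send it verbatim;
    # otherwise summarize the oldest turns and keep recent turns as-is.
    total = sum(count_tokens(turn) for turn in history)
    if total <= max_tokens:
        return None, history          # no summary needed

    kept, used = [], 0
    for turn in reversed(history):    # walk from the most recent turn back
        cost = count_tokens(turn)
        if used + cost > max_tokens // 2:   # reserve half the budget for recency
            break
        kept.append(turn)
        used += cost
    kept.reverse()

    older = history[: len(history) - len(kept)]
    return summarize(older), kept     # (summary of old turns, recent turns)
```

In practice `summarize` would itself be an AI call, and the budgeting could adapt to character traits (e.g. prioritizing relationships in roleplay), as the section above describes.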

"AI's Long-Context Understanding" Limitations and Solutions

AI models sometimes struggle to utilize context located in the middle of a long prompt (the so-called "lost in the middle" problem). This means that even if the prompt contains text the AI explicitly needs to remember, the character's response might "forget" it.
This phenomenon starts to appear when the total prompt length exceeds a certain token threshold. Due to the nature of character chats, the conversation history usually sits in the middle of the context, leading to memory limitations. You might have two questions about this:
1. If this happens above a certain token count, isn't it better to cap prompts below that threshold?
2. Can't we simply make sure the conversation history isn't located in the middle?
To answer this, we want to highlight a key feature of OOC. OOC supports up to 3,000 characters for Character Prompts. We also provide high-quality characters based on long settings and responses, such as simulations. According to our internal research, this environment can reach that "certain token threshold" quite quickly.
However, we'd rather work harder ourselves than give up on OOC’s charm! Therefore, we set our limits slightly higher than that specific threshold and strive to achieve maximum memory retention through the Long-term Memory System. We are exploring ways to improve this by continuously monitoring the data (e.g., the optimal memory system according to the number of tokens in a Character Prompt).
(Tip!) While the maximum limit for a Character Prompt is 3,000 characters, not forcing yourself to fill all 3,000 characters can actually be better for the AI's memory. (Regardless, we will do our best to ensure the best memory retention no matter how many characters you use!)

FAQ

Q: The character doesn't remember previous conversations!
A: First of all, as mentioned above, the token and memory limitations of current AI model technology have not been fully overcome yet. Apart from this, however, we are working hard to improve memory through summary systems, so please keep this in mind!
Q: I don't like the character's tone / I want to set formal or informal speech.
A: The character's tone may vary slightly depending on the character's detailed settings. Also, since the character changes in real time based on the chat content, the tone can shift dynamically to match the conversation.
Q: When creating a character, how do I display an image or refer to the user in a specific situation?
A: This is possible using the {{user}} and {{img: ImageName}} tags! For example, if you want a character to like the user they are talking to → '...likes {{user}}'. If you want a specific image to appear when the character likes the user → 'When ... likes {{user}}, {{img: ImageName}} is displayed.'
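The tag substitution in this answer can be sketched as a simple template pass. This is an illustrative assumption, not OOC's actual renderer: the function name and the `image_url_for` callback are invented for the example, and only the {{user}} and {{img: ImageName}} tags from the answer above are handled.

```python
import re

def render_tags(text, user_name, image_url_for):
    # Hypothetical sketch: replace {{user}} with the chatting user's name...
    text = text.replace("{{user}}", user_name)
    # ...and resolve each {{img: ImageName}} tag to its image reference.
    return re.sub(
        r"\{\{img:\s*(\w+)\}\}",
        lambda m: image_url_for(m.group(1)),
        text,
    )

print(render_tags("...likes {{user}}. {{img: Smile}}",
                  "Alex", lambda name: f"[image:{name}]"))
# → ...likes Alex. [image:Smile]
```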