# AI functionality

Since version 6, JabRef has AI functionality built in.

* AI can generate a summary of a research paper
* You can also chat with papers using a "smart" AI assistant

## AI summary tab

When you activate this tab, AI will generate a quick overview of the paper for you.

![AI summary tab screenshot](/files/seUjF3YU2zuRvrnHRIlu)

The AI will mention the main objectives of the research, methods used, key findings, and conclusions.

## AI chat tab

Here, you can ask questions, which are answered by the LLM.

![AI chat tab screenshot](/files/7IcL6NNueAH4lNQIrfpt)

In this window, you can see the following elements:

* The chat history with your messages
* An input field for typing and sending messages
* A button for clearing the chat history (just in case)

## How does the AI functionality work?

JabRef uses external AI providers to do the actual work; you can choose between several of them. They all run large language models (LLMs) to process requests, and they need small chunks of text to work with. For this, JabRef parses and indexes the linked PDF files of your entries: each file is split into fixed-length parts (so-called *chunks*), and for each chunk an *embedding* is generated. An embedding is a vector that represents the meaning of a piece of text. These vectors have a crucial property: texts with similar meaning have vectors that are close to each other (so-called *vector similarity*). Whenever you ask the AI a question, JabRef uses vector similarity to find the most relevant pieces of text in the indexed files and passes them to the LLM for processing.
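The chunk-embed-retrieve idea can be illustrated with a small, self-contained sketch. Note that this is a hypothetical toy example, not JabRef's actual implementation: real embeddings come from an AI provider's model, while here a crude bag-of-letters vector stands in so the example runs without any external service.

```python
import math

def chunk(text: str, size: int = 32) -> list[str]:
    """Split text into fixed-length parts ("chunks")."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def toy_embedding(chunk_text: str) -> list[float]:
    """Stand-in for a real embedding model: counts letters a-z.
    A real provider returns a vector capturing the text's meaning."""
    vec = [0.0] * 26
    for ch in chunk_text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Vector similarity: values near 1.0 mean the vectors are close."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Indexing: embed every chunk once, when the file is linked.
document = "The cat sat on the mat quietly. Accuracy of the model improved after training."
index = [(c, toy_embedding(c)) for c in chunk(document)]

# Retrieval: embed the question, then pick the most similar chunk
# and hand it to the LLM as context.
question_vec = toy_embedding("model accuracy improvement")
best_chunk = max(index, key=lambda item: cosine_similarity(question_vec, item[1]))[0]
```

With this toy setup, the chunk about model accuracy scores higher than the unrelated sentence, so it would be the text supplied to the LLM alongside your question.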

## More information

{% content-ref url="/pages/B7T0cJM5FoJnhQh4HNBc" %}
[How to enable and use AI features?](/ai/how-to-enable-and-use-ai-features.md)
{% endcontent-ref %}

{% content-ref url="/pages/uDpUSAEGMIONdfNGyKZw" %}
[AI providers and API keys](/ai/ai-providers-and-api-keys.md)
{% endcontent-ref %}

{% content-ref url="/pages/SPk0SIimpT66Htuf6RE2" %}
[AI troubleshooting](/ai/troubleshooting.md)
{% endcontent-ref %}

{% content-ref url="/pages/hQzGh4FLdVCDPbdnwRjU" %}
[AI preferences](/ai/preferences.md)
{% endcontent-ref %}

{% content-ref url="/pages/YT14vtZFVrAAIxGK3CWL" %}
[Running a local language model](/ai/local-llm.md)
{% endcontent-ref %}
