AI functionality
Last updated
Since version 6, JabRef has had AI functionality built in.
AI can generate a summary of a research paper
You can also chat with papers using a "smart" AI assistant
When you activate this tab, AI will generate a quick overview of the paper for you.
The AI will mention the main objectives of the research, methods used, key findings, and conclusions.
Here, you can ask questions, which are answered by the LLM.
In this window, you can see the following elements:
The chat history with your messages
A prompt field for typing and sending messages
A button for clearing the chat history (just in case)
JabRef uses external AI providers to do the actual work. You can choose between OpenAI, Mistral AI, and Hugging Face. They all run "Large Language Models" (LLMs) to process the requests.

The AI providers need chunks of text to work with. For this, JabRef parses and indexes the linked PDF files of entries: each file is split into fixed-length parts (so-called chunks), and an embedding is generated for each of them. An embedding is a vector representation of a piece of text that captures its meaning. These vectors have a crucial property: texts with similar meaning have vectors that are close to each other (so-called vector similarity). As a result, whenever you ask the AI a question, JabRef uses vector similarity to find the most relevant pieces of text from the indexed files and sends them to the LLM together with your question.
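The chunk/embed/retrieve pipeline described above can be sketched as follows. This is an illustration only, not JabRef's actual implementation: the `embed` function below is a toy word-frequency vector, whereas JabRef obtains real embeddings from the configured AI provider's model, and the function and parameter names here are hypothetical.

```python
import math
from collections import Counter


def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-length parts (so-called chunks)."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(text: str) -> Counter:
    """Toy embedding: a word-frequency vector (stand-in for a real model)."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Vector similarity: texts with similar meaning score close to 1."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Find the chunks most relevant to the question via vector similarity."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine_similarity(q, embed(c)),
                    reverse=True)
    return ranked[:top_k]
```

In a real setup, the retrieved chunks would then be placed into the LLM prompt alongside your question, so the model can answer using the paper's own text.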