
AI and Knowledge: does it learn? How can I give it my entire project as context?

Miguel Del Amor, Jun 5, 2025

I believe this is a concept we all must clearly understand: Large Language Models (LLMs) do not inherently have the ability to “know your context” or “have your full project” stored internally. Instead, they rely on specific techniques that enrich the queries you send them.


When you train an LLM, you generate internal weights that allow the model to think and respond in specific ways. Fine-tuning adjusts these weights further to alter behavior. This results in implicit knowledge: during training, the LLM condenses statistical patterns from vast datasets into neural weights, enabling it to produce relevant facts or answers. However, this implicit knowledge is static: it doesn’t update automatically after training.

The capacity to “learn” dynamically from new, changing data is handled externally: the additional information is stored separately and injected into every query. A layer of software prepares each prompt by adding the relevant contextual information before it reaches the LLM.
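To make this concrete, here is a minimal Python sketch of that prompt-preparation layer. The function and snippet names are hypothetical, not any particular tool’s code; the point is only that the “memory” travels as plain text inside the query:

```python
# Illustrative sketch (hypothetical names): how a tool might inject stored
# context into a prompt before it reaches the LLM. The model never "learns"
# this data; it simply reads it as part of the query text.

def build_prompt(question: str, context_snippets: list[str]) -> str:
    """Assemble a plain-text prompt that bundles retrieved context
    with the user's question."""
    context_block = "\n\n".join(
        f"[Context {i + 1}]\n{snippet}"
        for i, snippet in enumerate(context_snippets)
    )
    return (
        "Use the following project context to answer the question.\n\n"
        f"{context_block}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example snippets that an external store might have returned:
snippets = [
    "auth.py defines login(), which validates tokens against the session store.",
    "The session store is backed by Redis with a 30-minute TTL.",
]
prompt = build_prompt("How long do sessions last?", snippets)
print(prompt)
```

Every new question triggers this assembly again, which is why the model appears to “remember” your project without its weights ever changing.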

A clear example of this approach, which you can even explore by checking out its open-source code, is Continue.dev. Continue automatically assembles context from your current file, recent conversations, and related past information retrieved using semantic search, powered by a vector database (ChromaDB). Semantic search allows Continue.dev to quickly recall relevant previous interactions or files based on meaning rather than exact wording. It then sends this assembled context to the LLM, ensuring precise, relevant, and useful responses.
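Here is a deliberately simplified sketch of the retrieval idea. Real tools like Continue.dev use learned neural embeddings (which capture meaning) and a vector database such as ChromaDB; this toy version fakes the embedding with word counts purely to show the “rank documents by vector similarity, return the top match” plumbing:

```python
# Toy sketch of similarity-based retrieval (NOT Continue.dev's actual code).
# Real systems embed text with a neural model so that similarity reflects
# meaning; here a bag-of-words vector stands in for the embedding so the
# example runs with no external dependencies.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a neural embedding: a word-frequency vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

documents = [
    "Session tokens expire after thirty minutes of inactivity.",
    "The build pipeline runs unit tests before packaging.",
    "Login failures are logged to the security audit trail.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(retrieve("when do session tokens expire?", documents))
```

Swap the word-count `embed` for a real embedding model and the list of documents for a vector database, and you have the core retrieval loop these tools are built around.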

The actual message sent to the LLM is a prompt that explains the context and purpose in plain text. The LLM then processes it exactly as if you had manually copied and pasted all the relevant information yourself. If you use Ollama integrated with Continue.dev, you can directly inspect the prompts being sent and verify what I’m describing here.

It’s not as simple as merely setting up a vector database, querying it, and forwarding the results to the LLM. There are numerous advanced techniques for context splitting (chunking), Retrieval-Augmented Generation (RAG), agentic RAG, and more. However, the fundamental principle remains unchanged.
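As one small example of those techniques, context splitting simply cuts long documents into overlapping pieces before they are indexed, so retrieval can return just the relevant slice. A minimal sketch, with arbitrary example sizes:

```python
# Minimal sketch of context splitting (chunking). Production chunkers split
# on sentence or code boundaries and tune sizes per model; the character
# counts here are arbitrary illustrative values.

def chunk_text(text: str, chunk_size: int = 60, overlap: int = 15) -> list[str]:
    """Split text into chunks of roughly chunk_size characters,
    each overlapping the previous chunk by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = ("Large documents are split into overlapping chunks so that a vector "
       "search can index and retrieve each piece independently.")
chunks = chunk_text(doc)
for c in chunks:
    print(repr(c))
```

The overlap ensures that a sentence straddling a chunk boundary still appears intact in at least one chunk, which keeps retrieval results coherent.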

Ultimately, the magic isn’t just in the model; it’s in how cleverly you feed it the right context at exactly the right moment.

At DisplayNote, we’re embracing this principle.

We’re actively working with AI tools, not just exploring their capabilities, but also applying best practices around prompt design, context management, and real-world testing. Our goal is to understand where these technologies add genuine value, and to integrate them thoughtfully into our workflows and products. By experimenting, refining, and sharing what we learn, we’re helping shape an approach to AI that’s practical, responsible, and grounded in real outcomes.
