{"id":947,"date":"2024-06-20T11:13:31","date_gmt":"2024-06-20T11:13:31","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2024\/06\/20\/build-your-own-ai-driven-code-analysis-chatbot-for-developers-with-the-genai-stack\/"},"modified":"2024-06-20T11:13:31","modified_gmt":"2024-06-20T11:13:31","slug":"build-your-own-ai-driven-code-analysis-chatbot-for-developers-with-the-genai-stack","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2024\/06\/20\/build-your-own-ai-driven-code-analysis-chatbot-for-developers-with-the-genai-stack\/","title":{"rendered":"Build Your Own AI-Driven Code Analysis Chatbot for Developers with the GenAI Stack"},"content":{"rendered":"<p>The topic of GenAI is everywhere now, but even with so much interest, many developers are still trying to understand what the real-world use cases are. Last year, Docker hosted an <a href=\"https:\/\/docker.devpost.com\/\" target=\"_blank\" rel=\"noopener\">AI\/ML Hackathon<\/a>, and genuinely interesting projects were submitted.\u00a0<\/p>\n<p>In this <a href=\"https:\/\/www.docker.com\/blog\/tag\/ai-ml-hackathon\/\" target=\"_blank\" rel=\"noopener\">AI\/ML Hackathon post<\/a>, we will dive into a winning submission, <a href=\"https:\/\/github.com\/dockersamples\/codeExplorer\" target=\"_blank\" rel=\"noopener\">Code Explorer<\/a>, in the hope that it sparks project ideas for you.\u00a0<\/p>\n<p>For developers, understanding and navigating codebases can be a constant challenge. Even popular AI assistant tools like ChatGPT can fail to understand the context of your projects through code access and struggle with complex logic or unique project requirements. Although large language models (LLMs) can be valuable companions during development, they may not always grasp the specific nuances of your codebase. 
This is where the need for a deeper understanding and additional resources comes in.<\/p>\n<p>Imagine you\u2019re working on a project that queries datasets for both cats and dogs. You already have functional code in DogQuery.py that retrieves dog data using pagination (a technique for fetching data in parts). Now, you want to update CatQuery.py to achieve the same functionality for cat data. Wouldn\u2019t it be amazing if you could ask your AI assistant to reference the existing code in DogQuery.py and guide you through the modification process?\u00a0<\/p>\n<p>This is where <a href=\"https:\/\/github.com\/dockersamples\/codeExplorer\" target=\"_blank\" rel=\"noopener\">Code Explorer<\/a>, an AI-powered chatbot, comes in.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">What makes Code Explorer unique?<\/h2>\n<p>The following demo, which was submitted to the AI\/ML Hackathon, provides an overview of Code Explorer (Figure 1).<\/p>\n<div class=\"wp-block-embed__wrapper\">\n<\/div>\n<p><strong>Figure 1: <\/strong>Demo of the Code Explorer extension as submitted to the AI\/ML Hackathon.<\/p>\n<p>Code Explorer helps you find answers about your code by searching relevant information based on the programming language and folder location. Unlike generic chatbots, Code Explorer goes beyond general coding knowledge. It leverages a powerful AI technique called retrieval-augmented generation (RAG) to understand your code\u2019s specific context. This allows it to provide more relevant and accurate answers based on your actual project.<\/p>\n<p>Code Explorer supports a variety of programming languages, such as *.swift, *.py, *.java, *.cs, etc. 
This tool can be useful for learning or debugging your code projects, such as Xcode projects, Android projects, AI applications, web development, and more.<\/p>\n<p>Benefits of Code Explorer include:<\/p>\n<p><strong>Effortless learning<\/strong>: Explore and understand your codebase more easily.<\/p>\n<p><strong>Efficient debugging<\/strong>: Troubleshoot issues faster by getting insights from your code itself.<\/p>\n<p><strong>Improved productivity<\/strong>: Spend less time deciphering code and more time building amazing things.<\/p>\n<p><strong>Supports various languages<\/strong>: Works with popular languages like Python, Java, Swift, C#, and more.<\/p>\n<p>Use cases include:<\/p>\n<p><strong>Understanding complex logic<\/strong>: \u201cExplain how the calculate_price function interacts with the get_discount function in billing.py.\u201d<\/p>\n<p><strong>Debugging errors<\/strong>: \u201cWhy is my getUserData function in user.py returning an empty list?\u201d<\/p>\n<p><strong>Learning from existing code<\/strong>: \u201cHow can I modify search.py to implement pagination similar to search_results.py?\u201d<\/p>\n<h2 class=\"wp-block-heading\">How does it work?<\/h2>\n<p>Code Explorer leverages the power of a RAG-based AI framework, providing context about your code to an existing LLM model. Figure 2 shows the magic behind the scenes.<\/p>\n<p><a href=\"https:\/\/www.docker.com\/wp-content\/uploads\/2024\/06\/F2-Code-Explorer-diagram.png\" target=\"_blank\" rel=\"noopener\"><\/a><strong>Figure 2:<\/strong> Diagram of Code Explorer steps.<\/p>\n<h2 class=\"wp-block-heading\">Step 1. Process documents<\/h2>\n<p>The user selects a codebase folder through the Streamlit app. The process_documents function in the file db.py is called. This function performs the following actions:<\/p>\n<p><strong>Parsing code:<\/strong> It reads and parses the code files within the selected folder. 
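<\/p>
<p>As a rough illustration of this parsing step, the following is a minimal sketch using Python\u2019s standard ast module to pull out function names, parameters, and docstrings. The extract_functions helper and the sample snippet are hypothetical and only sketch the idea; they are not the project\u2019s actual db.py implementation.<\/p>

```python
# Illustrative sketch: extract function names, parameters, and docstrings
# from Python source with the stdlib ast module. Hypothetical helper, not
# the actual Code Explorer db.py code.
import ast

def extract_functions(source: str):
    """Return a list of dicts describing each function found in `source`."""
    tree = ast.parse(source)
    functions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            functions.append({
                "name": node.name,
                "params": [arg.arg for arg in node.args.args],
                "docstring": ast.get_docstring(node),
            })
    return functions

sample = '''
def get_dogs(page, size=50):
    """Fetch one page of dog records."""
    return []
'''

for fn in extract_functions(sample):
    print(fn["name"], fn["params"], fn["docstring"])
    # → get_dogs ['page', 'size'] Fetch one page of dog records.
```

A real implementation would also need language-specific parsers for the other supported languages, since ast only handles Python.
<p>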
This involves using language-specific parsers (e.g., the ast module for Python) to understand the code structure and syntax.<\/p>\n<p><strong>Extracting information:<\/strong> It extracts relevant information from the code, such as:<\/p>\n<p>Variable names and their types<\/p>\n<p>Function names, parameters, and return types<\/p>\n<p>Class definitions and properties<\/p>\n<p>Code comments and docstrings<\/p>\n<p><strong>Loading and chunking documents:<\/strong> It creates a RecursiveCharacterTextSplitter object based on the language. This object splits each document into smaller chunks of a specified size (5000 characters) with some overlap (500 characters) for better context.<\/p>\n<p><strong>Creating Neo4j vector store:<\/strong> It creates a Neo4j vector store, a type of database that stores and connects code elements using vectors. These vectors represent the relationships and similarities between different parts of the code.<\/p>\n<p>Each code element (e.g., function, variable) is represented as a node in the Neo4j graph database.<\/p>\n<p>Relationships between elements (e.g., function call, variable assignment) are represented as edges connecting the nodes.<\/p>\n<h2 class=\"wp-block-heading\">Step 2. Create LLM chains<\/h2>\n<p>This step is triggered only after the codebase has been processed (Step 1).<\/p>\n<p>Two LLM chains are created:<\/p>\n<p><strong>Create Documents QnA chain:<\/strong> This chain allows users to talk to the chatbot in a question-and-answer style. It refers to the vector database when answering coding questions, citing the relevant source code files.<\/p>\n<p><strong>Create Agent chain:<\/strong> A separate Agent chain is created, which uses the QnA chain as a tool. You can think of it as an additional layer on top of the QnA chain that allows you to communicate with the chatbot more casually. 
Under the hood, the chatbot may consult the QnA chain when it needs help with a coding question; in effect, one AI discusses the user\u2019s question with another before returning the final answer. In testing, the Agent chain tends to summarize rather than give the more detailed technical responses the QnA chain produces on its own.<\/p>\n<p>LangChain is used to orchestrate the chatbot pipeline.<\/p>\n<h2 class=\"wp-block-heading\">Step 3. User asks questions and AI chatbot responds<\/h2>\n<p>The Streamlit app provides a chat interface for users to ask questions about their code. User inputs are stored and used to query the LLM chain or the QA\/Agent chains. Based on the following factors, the app chooses how to answer the user:<\/p>\n<p><strong>Codebase processed:<\/strong><\/p>\n<p>Yes: The QA RAG chain is used if the user has selected <strong>Detailed mode<\/strong> in the sidebar. This mode leverages the processed codebase for in-depth answers.<\/p>\n<p>Yes: Custom agent logic (using the get_agent function) is used if the user has selected <strong>Agent mode<\/strong>. This mode might provide more concise answers compared to the QA RAG model.<\/p>\n<p><strong>Codebase not processed:<\/strong><\/p>\n<p>The LLM chain is used directly if the user has not processed the codebase yet.<\/p>\n<h2 class=\"wp-block-heading\">Getting started<\/h2>\n<p>To get started with Code Explorer, check the following:<\/p>\n<p>Ensure that you have installed the <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\/\" target=\"_blank\" rel=\"noopener\">latest version of Docker Desktop<\/a>.<\/p>\n<p>Ensure that you have <a href=\"https:\/\/ollama.com\/\" target=\"_blank\" rel=\"noopener\">Ollama running locally.<\/a><\/p>\n<p>Then, complete the four steps explained below.<\/p>\n<h3 class=\"wp-block-heading\">1. 
Clone the repository<\/h3>\n<p>Open a terminal window and run the following command to clone the sample application.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ngit clone https:\/\/github.com\/dockersamples\/CodeExplorer\n<\/div>\n<p>You should now have the following files in your CodeExplorer directory:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ntree<br \/>\n.<br \/>\n\u251c\u2500\u2500 LICENSE<br \/>\n\u251c\u2500\u2500 README.md<br \/>\n\u251c\u2500\u2500 agent.py<br \/>\n\u251c\u2500\u2500 bot.Dockerfile<br \/>\n\u251c\u2500\u2500 bot.py<br \/>\n\u251c\u2500\u2500 chains.py<br \/>\n\u251c\u2500\u2500 db.py<br \/>\n\u251c\u2500\u2500 docker-compose.yml<br \/>\n\u251c\u2500\u2500 images<br \/>\n\u2502   \u251c\u2500\u2500 app.png<br \/>\n\u2502   \u2514\u2500\u2500 diagram.png<br \/>\n\u251c\u2500\u2500 pull_model.Dockerfile<br \/>\n\u251c\u2500\u2500 requirements.txt<br \/>\n\u2514\u2500\u2500 utils.py\n<p>2 directories, 13 files\n<\/p><\/div>\n<h3 class=\"wp-block-heading\">2. Create environment variables<\/h3>\n<p>Before running the GenAI stack services, open the .env file and modify the following variables according to your needs. This file stores environment variables that influence your application\u2019s behavior.<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\nOPENAI_API_KEY=sk-XXXXX<br \/>\nLLM=codellama:7b-instruct<br \/>\nOLLAMA_BASE_URL=http:\/\/host.docker.internal:11434<br \/>\nNEO4J_URI=neo4j:\/\/database:7687<br \/>\nNEO4J_USERNAME=neo4j<br \/>\nNEO4J_PASSWORD=XXXX<br \/>\nEMBEDDING_MODEL=ollama<br \/>\nLANGCHAIN_ENDPOINT=&quot;https:\/\/api.smith.langchain.com&quot;<br \/>\nLANGCHAIN_TRACING_V2=true # false<br \/>\nLANGCHAIN_PROJECT=default<br \/>\nLANGCHAIN_API_KEY=ls__cbaXXXXXXXX06dd\n<\/div>\n<p><strong>Note:<\/strong><\/p>\n<p>If using EMBEDDING_MODEL=sentence_transformer, uncomment code in requirements.txt and chains.py. 
It was commented out to reduce code size.<\/p>\n<p>Make sure to set OLLAMA_BASE_URL=http:\/\/llm:11434 in the .env file when using the Ollama Docker container. If you\u2019re running on Mac, set OLLAMA_BASE_URL=<a href=\"http:\/\/host.docker.internal:11434\/\" target=\"_blank\" rel=\"noopener\">http:\/\/host.docker.internal:11434<\/a> instead.<\/p>\n<h3 class=\"wp-block-heading\">3. Build and run Docker GenAI services<\/h3>\n<p>Run the following command to build and bring up the Docker Compose services:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\ndocker compose --profile linux up --build\n<\/div>\n<p>You will see output similar to the following:<\/p>\n<div class=\"wp-block-syntaxhighlighter-code \">\n[+] Running 5\/5<br \/>\n \u2714 Network codeexplorer_net             Created                                              0.0s<br \/>\n \u2714 Container codeexplorer-database-1    Created                                              0.1s<br \/>\n \u2714 Container codeexplorer-llm-1         Created                                              0.1s<br \/>\n \u2714 Container codeexplorer-pull-model-1  Created                                              0.1s<br \/>\n \u2714 Container codeexplorer-bot-1         Created                                              0.1s<br \/>\nAttaching to bot-1, database-1, llm-1, pull-model-1<br \/>\nllm-1         | Couldn't find '\/root\/.ollama\/id_ed25519'. 
Generating new private key.<br \/>\nllm-1         | Your new public key is:<br \/>\nllm-1         |<br \/>\nllm-1         | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGEM2BIxSSje6NFssxK7J1+X+46n+cWTQufEQjMUzLGC<br \/>\nllm-1         |<br \/>\nllm-1         | 2024\/05\/23 15:05:47 routes.go:1008: INFO server config env=&quot;map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http:\/\/localhost https:\/\/localhost http:\/\/localhost:* https:\/\/localhost:* http:\/\/127.0.0.1 https:\/\/127.0.0.1 http:\/\/127.0.0.1:* https:\/\/127.0.0.1:* http:\/\/0.0.0.0 https:\/\/0.0.0.0 http:\/\/0.0.0.0:* https:\/\/0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]&quot;<br \/>\nllm-1         | time=2024-05-23T15:05:47.265Z level=INFO source=images.go:704 msg=&quot;total blobs: 0&quot;<br \/>\nllm-1         | time=2024-05-23T15:05:47.265Z level=INFO source=images.go:711 msg=&quot;total unused blobs removed: 0&quot;<br \/>\nllm-1         | time=2024-05-23T15:05:47.265Z level=INFO source=routes.go:1054 msg=&quot;Listening on [::]:11434 (version 0.1.38)&quot;<br \/>\nllm-1         | time=2024-05-23T15:05:47.266Z level=INFO source=payload.go:30 msg=&quot;extracting embedded files&quot; dir=\/tmp\/ollama2106292006\/runners<br \/>\npull-model-1  | pulling ollama model codellama:7b-instruct using http:\/\/host.docker.internal:11434<br \/>\ndatabase-1    | Installing Plugin 'apoc' from \/var\/lib\/neo4j\/labs\/apoc-*-core.jar to \/var\/lib\/neo4j\/plugins\/apoc.jar<br \/>\ndatabase-1    | Applying default values for plugin apoc to neo4j.conf<br \/>\npulling manifest<br \/>\npull-model-1  | pulling 3a43f93b78ec... 100% \u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f 3.8 GB<br \/>\npulling manifest<br \/>\npulling manifest<br \/>\npull-model-1  | pulling 3a43f93b78ec... 100% 
\u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f 3.8 GB<br \/>\npull-model-1  | pulling 8c17c2ebb0ea... 100% \u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f 7.0 KB<br \/>\npull-model-1  | pulling 590d74a5569b... 100% \u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f 4.8 KB<br \/>\npull-model-1  | pulling 2e0493f67d0c... 100% \u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f   59 B<br \/>\npull-model-1  | pulling 7f6a57943a88... 100% \u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f  120 B<br \/>\npull-model-1  | pulling 316526ac7323... 100% \u2595\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f  529 B<br \/>\npull-model-1  | verifying sha256 digest<br \/>\npull-model-1  | writing manifest<br \/>\npull-model-1  | removing any unused layers<br \/>\npull-model-1  | success<br \/>\nllm-1         | time=2024-05-23T15:05:52.802Z level=INFO source=payload.go:44 msg=&quot;Dynamic LLM libraries [cpu cuda_v11]&quot;<br \/>\nllm-1         | time=2024-05-23T15:05:52.806Z level=INFO source=types.go:71 msg=&quot;inference compute&quot; id=0 library=cpu compute=&quot;&quot; driver=0.0 name=&quot;&quot; total=&quot;7.7 GiB&quot; available=&quot;2.5 GiB&quot;<br \/>\npull-model-1 exited with code 0<br \/>\ndatabase-1    | 2024-05-23 15:05:53.411+0000 INFO  Starting...<br \/>\ndatabase-1    | 2024-05-23 15:05:53.933+0000 INFO  This instance is ServerId{ddce4389} (ddce4389-d9fd-4d98-9116-affa229ad5c5)<br \/>\ndatabase-1    | 2024-05-23 15:05:54.431+0000 INFO  ======== Neo4j 5.11.0 ========<br \/>\ndatabase-1    | 2024-05-23 15:05:58.048+0000 INFO  Bolt enabled on 0.0.0.0:7687.<br \/>\ndatabase-1    | 
[main] INFO org.eclipse.jetty.server.Server - jetty-10.0.15; built: 2023-04-11T17:25:14.480Z; git: 68017dbd00236bb7e187330d7585a059610f661d; jvm 17.0.8.1+1<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler@7c007713{\/,null,AVAILABLE}<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.session.DefaultSessionIdManager - Session workerName=node0<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@5bd5ace9{\/db,null,AVAILABLE}<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for \/browser, did not find org.eclipse.jetty.jsp.JettyJspServlet<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@38f183e9{\/browser,jar:file:\/var\/lib\/neo4j\/lib\/neo4j-browser-5.11.0.jar!\/browser,AVAILABLE}<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@769580de{\/,null,AVAILABLE}<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.AbstractConnector - Started http@6bd87866{HTTP\/1.1, (http\/1.1)}{0.0.0.0:7474}<br \/>\ndatabase-1    | [main] INFO org.eclipse.jetty.server.Server - Started Server@60171a27{STARTING}[10.0.15,sto=0] @5997ms<br \/>\ndatabase-1    | 2024-05-23 15:05:58.619+0000 INFO  Remote interface available at http:\/\/localhost:7474\/<br \/>\ndatabase-1    | 2024-05-23 15:05:58.621+0000 INFO  id: F2936F8E5116E0229C97F43AD52142685F388BE889D34E000D35E074D612BE37<br \/>\ndatabase-1    | 2024-05-23 15:05:58.621+0000 INFO  name: system<br \/>\ndatabase-1    | 2024-05-23 15:05:58.621+0000 INFO  creationDate: 2024-05-23T12:47:52.888Z<br \/>\ndatabase-1    | 2024-05-23 15:05:58.622+0000 INFO  Started.\n<\/div>\n<p>The logs indicate that the application has successfully started 
all its components, including the LLM, the Neo4j database, and the main application container. You should now be able to interact with the application through the user interface.<\/p>\n<p>You can view the services via the Docker Desktop dashboard (Figure 3).<\/p>\n<p><a href=\"https:\/\/www.docker.com\/wp-content\/uploads\/2024\/06\/F4-Code-Explorer-running.png\" target=\"_blank\" rel=\"noopener\"><\/a><strong>Figure 3: <\/strong>The Docker Desktop dashboard showing the running Code Explorer app powered by the GenAI Stack.<\/p>\n<p>The Code Explorer stack consists of the following services:<\/p>\n<h4 class=\"wp-block-heading\">Bot<\/h4>\n<p>The bot service is the core application.\u00a0<\/p>\n<p>Built with <a href=\"https:\/\/streamlit.io\/\" target=\"_blank\" rel=\"noopener\">Streamlit<\/a>, it provides the user interface through a web browser. The build section uses a Dockerfile named bot.Dockerfile to build a custom image containing your Streamlit application code.\u00a0<\/p>\n<p>This service exposes port 8501, which makes the bot UI accessible through a web browser.<\/p>\n<h4 class=\"wp-block-heading\">Pull model<\/h4>\n<p>This service downloads the <a href=\"https:\/\/ai.meta.com\/blog\/code-llama-large-language-model-coding\/\" target=\"_blank\" rel=\"noopener\">codellama:7b-instruct<\/a> model.\u00a0<\/p>\n<p>The model is based on Meta\u2019s <a href=\"https:\/\/ai.meta.com\/llama\/\" target=\"_blank\" rel=\"noopener\">Llama 2<\/a> model.\u00a0<\/p>\n<p>codellama:7b-instruct is additionally trained on code-related data and fine-tuned to follow instructions in natural language.\u00a0<\/p>\n<p>This specialization makes it particularly adept at handling questions about code.<\/p>\n<p><strong>Note: <\/strong>You may notice that the pull-model-1 service exits with code 0, which indicates successful execution. 
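<\/p>
<p>A one-shot model-pull service of this kind is typically wired up along the following lines in Compose. This is only a hedged sketch of the pattern; the repository\u2019s actual docker-compose.yml uses pull_model.Dockerfile, and the image names and pull command below are assumptions for illustration.<\/p>

```yaml
# Hypothetical sketch of a one-shot "pull model" service: it runs once,
# asks the Ollama service to pull the model, and then exits with code 0.
services:
  llm:
    image: ollama/ollama:latest
  pull-model:
    image: curlimages/curl:latest        # assumed image, for illustration
    command: >
      curl -s http://llm:11434/api/pull
      -d '{"name": "codellama:7b-instruct"}'
    depends_on:
      - llm
  bot:
    build:
      dockerfile: bot.Dockerfile
    depends_on:
      pull-model:
        condition: service_completed_successfully  # wait for the pull to finish
```

The `service_completed_successfully` condition is what lets dependent services wait for a one-shot container to exit cleanly before starting.
<p>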
This service is designed just to download the LLM model (codellama:7b-instruct). Once the download is complete, there\u2019s no further need for this service to remain running. Exiting with code 0 signifies that the service finished its task successfully (downloading the model).<\/p>\n<h4 class=\"wp-block-heading\">Database<\/h4>\n<p>This service manages a <a href=\"https:\/\/neo4j.com\/\" target=\"_blank\" rel=\"noopener\">Neo4j graph database<\/a>.<\/p>\n<p>It efficiently stores and retrieves vector embeddings, which represent the code files in a mathematical format suitable for analysis by the LLM model.<\/p>\n<p>The Neo4j vector database can be explored at <a href=\"http:\/\/localhost:7474\/\" target=\"_blank\" rel=\"noopener\">http:\/\/localhost:7474<\/a> (Figure 4).<\/p>\n<p><a href=\"https:\/\/www.docker.com\/wp-content\/uploads\/2024\/06\/F5-Code-Explorer-database.png\" target=\"_blank\" rel=\"noopener\"><\/a><strong>Figure 4:<\/strong> Neo4j database information.<\/p>\n<h4 class=\"wp-block-heading\">LLM<\/h4>\n<p>This service acts as the LLM host, utilizing the <a href=\"https:\/\/ollama.ai\/\" target=\"_blank\" rel=\"noopener\">Ollama framework<\/a>.\u00a0<\/p>\n<p>It manages the downloaded LLM model (not the embedding), making it accessible for use by the bot application.<\/p>\n<h3 class=\"wp-block-heading\">4. Access the application<\/h3>\n<p>You can now view your Streamlit app in your browser by accessing <a href=\"http:\/\/localhost:8501\/\" target=\"_blank\" rel=\"noopener\">http:\/\/localhost:8501<\/a> (Figure 5).<\/p>\n<p><a href=\"https:\/\/www.docker.com\/wp-content\/uploads\/2024\/06\/F6-Process-files.png\" target=\"_blank\" rel=\"noopener\"><\/a><strong>Figure 5: <\/strong>View the app.<\/p>\n<p>In the sidebar, enter the path to your code folder and select <strong>Process files<\/strong> (Figure 6). 
Then, you can start asking questions about your code in the main chat.<\/p>\n<p><a href=\"https:\/\/www.docker.com\/wp-content\/uploads\/2024\/06\/F7-Running-process.png\" target=\"_blank\" rel=\"noopener\"><\/a><strong>Figure 6: <\/strong>The app is running.<\/p>\n<p>You will find a toggle switch in the sidebar. By default, <strong>Detailed mode<\/strong> is enabled. Under this mode, the QA RAG chain is used (detailedMode=true). This mode leverages the processed codebase for in-depth answers.\u00a0<\/p>\n<p>When you toggle the switch to the other mode (detailedMode=false), the Agent chain is selected. This works as if one AI discusses the question with another AI to create the final answer. In testing, the agent tends to summarize rather than give the more detailed technical responses the QA chain produces on its own.<\/p>\n<p>Here\u2019s a result when detailedMode=true (Figure 7):<\/p>\n<p><strong>Figure 7: <\/strong>Result when detailedMode=true.<\/p>\n<p>Figure 8 shows a result when detailedMode=false:<\/p>\n<p><strong>Figure 8: <\/strong>Result when detailedMode=false.<\/p>\n<h2 class=\"wp-block-heading\">Start exploring<\/h2>\n<p>Code Explorer, powered by the <a href=\"https:\/\/www.docker.com\/blog\/introducing-a-new-genai-stack\/\" target=\"_blank\" rel=\"noopener\">GenAI Stack<\/a>, offers a compelling solution for developers seeking AI assistance with coding. This chatbot leverages RAG to delve into your codebase, providing insightful answers to your specific questions. Docker containers ensure smooth operation, while LangChain orchestrates the workflow. 
Neo4j stores code representations for efficient analysis.\u00a0<\/p>\n<p>Explore <a href=\"https:\/\/github.com\/dockersamples\/codeExplorer\" target=\"_blank\" rel=\"noopener\">Code Explorer<\/a> and the GenAI Stack to unlock the potential of AI in your development journey!<\/p>\n<h2 class=\"wp-block-heading\">Learn more<\/h2>\n<p>Subscribe to the <a href=\"https:\/\/www.docker.com\/newsletter-subscription\/\" target=\"_blank\" rel=\"noopener\">Docker Newsletter<\/a>.<\/p>\n<p>Get the latest release of <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\/\" target=\"_blank\" rel=\"noopener\">Docker Desktop<\/a>.<\/p>\n<p>Vote on what\u2019s next! Check out our <a href=\"https:\/\/github.com\/docker\/roadmap\" target=\"_blank\" rel=\"noopener\">public roadmap<\/a>.<\/p>\n<p>Have questions? The <a href=\"https:\/\/www.docker.com\/community\/\" target=\"_blank\" rel=\"noopener\">Docker community is here to help<\/a>.<\/p>\n<p>New to Docker? <a href=\"https:\/\/docs.docker.com\/desktop\/\" target=\"_blank\" rel=\"noopener\">Get started<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>The topic of GenAI is everywhere now, but even with so much interest, many developers are still trying to understand 
[&hellip;]<\/p>\n","protected":false}}