{"id":1355,"date":"2024-10-21T04:47:31","date_gmt":"2024-10-21T04:47:31","guid":{"rendered":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2024\/10\/21\/announcing-ibm-granite-ai-models-now-available-on-docker-hub\/"},"modified":"2024-10-21T04:47:31","modified_gmt":"2024-10-21T04:47:31","slug":"announcing-ibm-granite-ai-models-now-available-on-docker-hub","status":"publish","type":"post","link":"https:\/\/rssfeedtelegrambot.bnaya.co.il\/index.php\/2024\/10\/21\/announcing-ibm-granite-ai-models-now-available-on-docker-hub\/","title":{"rendered":"Announcing IBM Granite AI Models Now Available on Docker Hub"},"content":{"rendered":"<p>We are thrilled to announce that <a href=\"https:\/\/www.ibm.com\/granite\" target=\"_blank\" rel=\"noopener\">Granite models<\/a>, IBM\u2019s family of open source and proprietary models built for business, as well as Red Hat InstructLab model alignment tools, are now available on <a href=\"https:\/\/hub.docker.com\/\" target=\"_blank\" rel=\"noopener\">Docker Hub<\/a>.\u00a0<\/p>\n<p>Now, developer teams can easily access, deploy, and scale applications using IBM\u2019s AI models specifically designed for developers.<\/p>\n<p>This news will be officially announced during the AI track of the keynote at <a href=\"https:\/\/www.ibm.com\/community\/ibm-techxchange-conference\/\" target=\"_blank\" rel=\"noopener\">IBM TechXchange<\/a> on October 22. Attendees will get an exclusive look at how IBM\u2019s Granite models on Docker Hub accelerate AI-driven application development across multiple programming languages.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Why Granite on Docker Hub?<\/strong><\/h2>\n<p>With a principled approach to data transparency, model alignment, and security, IBM\u2019s open source Granite models represent a significant leap forward in natural language processing. 
The models are available under the <a href=\"https:\/\/www.apache.org\/licenses\/LICENSE-2.0\" target=\"_blank\" rel=\"noopener\">Apache 2.0 license<\/a>, empowering developer teams to bring generative AI into mission-critical applications and workflows.<\/p>\n<p>Granite models deliver superior performance in coding and targeted language tasks at lower latencies, all while requiring a fraction of the compute resources and reducing the cost of inference. This efficiency allows developers to experiment, build, and scale generative AI applications both on-premises and in the cloud, all within departmental budgetary limits.<\/p>\n<p>Here\u2019s what this means for you:<\/p>\n<p><strong>Simplified deployment<\/strong>: Pull the Granite image from Docker Hub and get up and running in minutes.<\/p>\n<p><strong>Scalability<\/strong>: Docker offers a lightweight and efficient method for scaling artificial intelligence and machine learning (AI\/ML) applications. It allows you to run multiple containers on a single machine or distribute them across different machines in a cluster, enabling horizontal scalability.<\/p>\n<p><strong>Flexibility<\/strong>: Customize and extend the model to suit your specific needs without worrying about underlying infrastructure.<\/p>\n<p><strong>Portability<\/strong>: By building Docker images once and deploying them anywhere, you eliminate compatibility problems and reduce the need for environment-specific configuration.<\/p>\n<p><strong>Community support<\/strong>: Leverage the vast Docker and IBM communities for support, extensions, and collaborations.<\/p>\n<p>In addition to the IBM Granite models, Red Hat also made the InstructLab model alignment tools available on Docker Hub. Developers using InstructLab can adapt pre-trained LLMs using far less real-world data and computing resources than alternative methodologies. 
InstructLab is model-agnostic and can be used to fine-tune any LLM of your choice by providing additional skills and knowledge.<\/p>\n<p>With IBM Granite AI models and InstructLab available on Docker Hub, Docker and IBM enable easy integration into existing environments and workflows.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Getting started with Granite<\/strong><\/h2>\n<p>You can find the following images available on Docker Hub:<\/p>\n<p><a href=\"https:\/\/hub.docker.com\/r\/redhat\/instructlab\" target=\"_blank\" rel=\"noopener\"><strong>InstructLab<\/strong><\/a>: Ideal for desktop or Mac users looking to explore InstructLab, this image provides a simple introduction to the platform without requiring specialized hardware. It\u2019s perfect for prototyping and testing before scaling up.<\/p>\n<p><a href=\"https:\/\/hub.docker.com\/r\/redhat\/instructlab-cuda\" target=\"_blank\" rel=\"noopener\"><strong>InstructLab with CUDA support<\/strong><\/a>: Designed for running full training workflows on GPU-equipped <a href=\"https:\/\/developers.redhat.com\/topics\/linux\/\" target=\"_blank\" rel=\"noopener\">Linux<\/a> servers, this image accelerates the synthetic data generation and training process by leveraging NVIDIA GPUs.<\/p>\n<p><a href=\"https:\/\/hub.docker.com\/r\/redhat\/granite-7b-lab-gguf\" target=\"_blank\" rel=\"noopener\"><strong>Granite-7b-lab<\/strong><\/a>: This image is optimized for model serving and inference on desktop or Mac environments, using the Granite-7B model. It allows for efficient and scalable inference tasks without needing a GPU, perfect for smaller-scale deployments or local testing.<\/p>\n<p><a href=\"https:\/\/hub.docker.com\/r\/redhat\/granite-7b-lab-gguf-cuda\/tags\" target=\"_blank\" rel=\"noopener\"><strong>Granite-7b-lab with CUDA support<\/strong><\/a>: For those with GPU-equipped Linux servers, this image supports faster model inference and serving through CUDA acceleration. 
This is ideal for high-performance AI applications where response times and throughput are critical.<\/p>\n<h3 class=\"wp-block-heading\"><strong>How to pull and run IBM Granite images from Docker Hub<\/strong><\/h3>\n<p>IBM Granite provides a family of AI models built for business applications. Follow these steps to pull and run an IBM Granite image using Docker and the CLI. You can follow similar steps for the Red Hat InstructLab images.<\/p>\n<h4 class=\"wp-block-heading\">Authenticate to Docker Hub<\/h4>\n<p>Run the docker login command and enter your Docker username and password when prompted.<\/p>\n<h4 class=\"wp-block-heading\">Pull the IBM Granite Image<\/h4>\n<p>Pull the IBM Granite image from Docker Hub. There are two versions of the image:<\/p>\n<p>redhat\/granite-7b-lab-gguf: For Mac\/desktop users with no GPU support<\/p>\n<p>redhat\/granite-7b-lab-gguf-cuda: For Linux servers with NVIDIA\u00ae CUDA\u00ae support<\/p>\n<h4 class=\"wp-block-heading\">Run the Image in a Container<\/h4>\n<p>Start a container with the IBM Granite image. 
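<\/p>
<p>The authenticate, pull, and run steps in this section can be collected into one place. The sketch below only prints each command rather than executing it, so it is safe to run anywhere; paste the printed lines into your terminal to perform the real steps:<\/p>

```shell
# Image names from the list above; use the -cuda variant on GPU-equipped Linux hosts.
IMAGE_CPU=redhat/granite-7b-lab-gguf
IMAGE_CUDA=redhat/granite-7b-lab-gguf-cuda

# Each step is echoed instead of executed, so this script has no side effects.
echo docker login                            # authenticate to Docker Hub
echo docker pull $IMAGE_CPU                  # fetch the CPU-only image
echo docker run --ipc=host -it $IMAGE_CPU    # start in CLI mode; append -s for server mode
```

<p>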
The container can be started in two modes: CLI (default) and server.<\/p>\n<p>To start the container in CLI mode, run the following:<br \/>docker run --ipc=host -it redhat\/granite-7b-lab-gguf<\/p>\n<p>This command opens an interactive bash session within the container, allowing you to use the tools.<\/p>\n<p>To run the container in server mode, run the following command:<\/p>\n<p>docker run --ipc=host -it redhat\/granite-7b-lab-gguf -s<\/p>\n<p>You can check <a href=\"https:\/\/www.ibm.com\/granite\/docs\/\" target=\"_blank\" rel=\"noopener\">IBM Granite\u2019s documentation<\/a> for details on using IBM Granite models.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Join us at IBM TechXchange<\/strong><\/h2>\n<p>Granite on Docker Hub will be officially announced at the <a href=\"https:\/\/www.ibm.com\/community\/ibm-techxchange-conference\/\" target=\"_blank\" rel=\"noopener\">IBM TechXchange Conference<\/a>, which will be held October 21-24 in Las Vegas. Our head of technical alliances, Eli Aleyner, will give a live demonstration at the AI track of the keynote during IBM TechXchange. Oleg \u0160elajev, Docker\u2019s staff developer evangelist, will show how app developers can test their GenAI apps with local models. Additionally, you\u2019ll learn how Docker\u2019s collaboration with Red Hat is improving developer productivity.<\/p>\n<p>The availability of Granite on Docker Hub marks a significant milestone in making advanced AI models accessible to all. 
We\u2019re excited to see how developer teams will harness the power of Granite to innovate and solve complex challenges.<\/p>\n<p>Stay anchored for more updates, and as always, happy coding!<\/p>\n<h2 class=\"wp-block-heading\"><strong>Learn more<\/strong><\/h2>\n<p>Read the\u00a0<a href=\"https:\/\/www.docker.com\/blog\/tag\/genai-docker-labs\/\" target=\"_blank\" rel=\"noopener\">Docker Labs GenAI series<\/a>.<\/p>\n<p>Subscribe to the\u00a0<a href=\"https:\/\/www.docker.com\/newsletter-subscription\/\" target=\"_blank\" rel=\"noopener\">Docker Newsletter<\/a>.\u00a0<\/p>\n<p><a href=\"https:\/\/www.redhat.com\/en\/topics\/ai\/what-is-instructlab\" target=\"_blank\" rel=\"noopener\">What is InstructLab?<\/a><\/p>\n<p><a href=\"https:\/\/www.redhat.com\/en\/topics\/ai\/what-are-granite-models\" target=\"_blank\" rel=\"noopener\">What are Granite Models?<\/a><\/p>\n<p><a href=\"https:\/\/reg.tools.ibm.com\/flow\/ibm\/techxchange24\/sessioncatalog\/page\/sessioncatalog?search=aleyner&amp;tab.sessioncatalogtabs=option_1601178495160\" target=\"_blank\" rel=\"noopener\">Accelerating AI Development with IBM Granite AI Models and Docker <\/a>\u2014 IBM TechXchange session with Eli Aleyner.<\/p>\n<p><a href=\"https:\/\/reg.tools.ibm.com\/flow\/ibm\/techxchange24\/sessioncatalog\/page\/sessioncatalog?search=selajev&amp;tab.sessioncatalogtabs=option_1601178495160\" target=\"_blank\" rel=\"noopener\">Developer productivity for apps with AI<\/a> \u2013 IBM TechXchange session with Oleg \u0160elajev.<\/p>","protected":false},"excerpt":{"rendered":"<p>We are thrilled to announce that Granite models, IBM\u2019s family of open source and proprietary models built for business, as 
[&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"sticky":false,"format":"standard","categories":[4],"tags":[]}