
Docker, Neo4j, LangChain, and Ollama release GenAI stack for devs

Docker introduced a new GenAI Stack in partnership with Neo4j, LangChain, and Ollama during its annual DockerCon developer conference keynote. The GenAI Stack is designed to help developers quickly and easily build generative AI applications without having to search for and configure various technologies themselves.

It includes pre-configured components such as large language models (LLMs) from Ollama, vector and graph databases from Neo4j, and the LangChain framework. Docker also introduced its first AI-powered product, Docker AI.

The GenAI Stack addresses common use cases for generative AI and is available in the Docker Learning Center and on GitHub. It offers pre-configured open-source LLMs, support from Ollama for setting up LLMs, Neo4j as the default database for improved AI/ML model performance, knowledge graphs to strengthen GenAI predictions, LangChain orchestration for context-aware reasoning applications, and various supporting tools and resources. This initiative aims to empower developers to leverage AI/ML capabilities in their applications effectively and securely.
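To make the component list concrete, here is a minimal sketch (not from the announcement) of how these pieces could be wired together from LangChain, assuming the default local endpoints Ollama and Neo4j typically expose; the model name and credentials are illustrative placeholders:

```python
# Illustrative sketch: talking to the stack's components via LangChain.
# Endpoints, model name, and credentials are assumptions, not the
# announced configuration.
from langchain.llms import Ollama
from langchain.graphs import Neo4jGraph

# Ollama serves local open-source LLMs over HTTP.
llm = Ollama(base_url="http://localhost:11434", model="llama2")

# Neo4j acts as both the graph and vector database in the stack.
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",  # placeholder credential
)

print(llm("In one sentence, what is a knowledge graph?"))
```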

“Developers are excited about the possibilities of GenAI, but the rate of change, number of vendors, and wide variation in technology stacks make it challenging to know where and how to start,” said Docker CEO Scott Johnston. “Today’s announcement removes this dilemma by enabling developers to get started quickly and safely using the Docker tools, content, and services they already know and love, together with partner technologies on the cutting edge of GenAI app development.”

Developers are provided with easy setup options that offer various capabilities, including simple data loading and vector index creation. This allows developers to import data, create vector indices, add questions and answers, and store them within the vector index, as sketched below.
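A hypothetical sketch of that data-loading step, using LangChain's Neo4jVector store with embeddings served by Ollama; the document contents, index name, and credentials are assumptions for illustration:

```python
# Hypothetical data-loading sketch: embed Q&A documents and store them
# in a Neo4j vector index. Values below are illustrative placeholders.
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import Neo4jVector
from langchain.schema import Document

embeddings = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")

docs = [
    Document(page_content="Q: What is the GenAI Stack? A: A pre-configured "
                          "bundle of Ollama, Neo4j, and LangChain for devs."),
]

# Creates a vector index in Neo4j (if absent) and stores the embeddings.
store = Neo4jVector.from_documents(
    docs,
    embedding=embeddings,
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",  # placeholder credential
    index_name="qa_index",  # assumed index name
)
```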

This setup enables enhanced querying, result enrichment, and the creation of flexible knowledge graphs. Developers can generate diverse responses in various formats, such as bulleted lists, chain of thought, GitHub issues, PDFs, poems, and more. Additionally, developers can compare the results achieved by different configurations, including LLMs on their own, LLMs with vectors, and LLMs with vector and knowledge graph integration.
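As a hedged sketch of what such a comparison might look like, the snippet below contrasts the bare LLM with the same LLM grounded by vector retrieval, reusing the `llm` and `store` objects from the earlier sketches; the question is a placeholder:

```python
# Comparing two of the configurations mentioned above: the LLM alone
# versus the LLM with vector retrieval. Builds on `llm` and `store`
# from the previous sketches.
from langchain.chains import RetrievalQA

question = "Summarize our stored Q&A pairs as a bulleted list."

# 1) LLM on its own: answers from model weights alone.
print(llm(question))

# 2) LLM with vector retrieval: answers grounded in indexed documents.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(qa.run(question))
```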
