According to the latest Gartner reports, generative AI adoption in organizations could increase productivity by 24.69%. Developing generative AI applications has become easier thanks to technological advancements, and LangChain is one of the best frameworks for building a generative AI application.
LangChain offers several advantages that make it attractive for building generative AI applications. Still, while LangChain is known for its ease of use, there are certain mistakes you could make. Let us look at the five rules you need to follow in a LangChain-based generative AI application.
It is important to validate the chain you create and never execute it before validation; running an unvalidated chain can lead to runtime errors.
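As a minimal sketch (the prompt and inputs here are illustrative), you can verify that every variable a prompt expects has been supplied before the chain runs:

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{context}\n{query}")
inputs = {"context": "You are a Flutter expert.", "query": "Explain Provider."}

# Fail fast on missing variables instead of hitting a runtime error mid-chain
missing = set(prompt.input_variables) - set(inputs)
if missing:
    raise ValueError(f"Chain inputs are incomplete: {missing}")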
LLM providers like OpenAI issue API keys that authenticate your API calls (users are charged for each call). These keys should never appear directly in the code; instead, store them in an environment file so that end users cannot access them.
Let us see how to implement that below.
# .env file
OPENAI_API_KEY="YOUR_API_KEY"

# application code
import os
from dotenv import load_dotenv  # requires the python-dotenv package

load_dotenv()  # load variables from the .env file
api_key = os.getenv("OPENAI_API_KEY")  # the key name must be passed as a string
The memory used for conversation needs to stay optimized; for example, the context can be derived from the last three questions rather than the entire conversation.
While the memory can be cleared manually, it is best to have it pruned automatically by fixing the number of interactions you want it to remember.
Let us see below how to implement automatic memory management with the context fixed at the last three interactions. Note that LangChain provides this windowed behavior through the ConversationBufferWindowMemory class, where k fixes the number of interactions kept.
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last 3 interactions in the conversation context
memory = ConversationBufferWindowMemory(k=3)
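As a usage sketch (assuming the langchain and langchain-openai packages are installed), the windowed memory can be plugged into a conversation chain like this:

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory
from langchain_openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
memory = ConversationBufferWindowMemory(k=3)  # only the last 3 exchanges are kept
conversation = ConversationChain(llm=llm, memory=memory)

print(conversation.predict(input="Hi, I am building a Flutter app."))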
Providing context helps the model understand the query, while clear and specific prompts reduce uncertainty. It’s an iterative process: start with a basic prompt, evaluate, and refine based on results.
Tools like LangChain’s “PromptTemplate”, “PromptSelector”, and “LLMChain” aid in managing prompts.
Let us see an example workflow in LangChain with Python:
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_openai import OpenAI  # the LLM provider used in this example

template = PromptTemplate(
    input_variables=["context", "query"],
    template="{context}\n{query}",
)

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
chain = LLMChain(llm=llm, prompt=template)  # LLMChain needs both an llm and a prompt

response = chain.run(
    context="You are an expert in Flutter development.",
    query="Explain how state management works in Flutter using the Provider package.",
)
print(response)
Errors or unexpected crashes can happen at any time, so it is better to log every action in the system. LangChain also ships with a debug mode that can log every query.
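For instance, as a minimal sketch, recent versions of langchain expose a global debug flag that logs the inputs and outputs of every chain step:

from langchain.globals import set_debug

set_debug(True)  # every chain and LLM call is now logged in detail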
When handling multiple queries, logging becomes crucial. For example:

import logging

logging.basicConfig(level=logging.INFO)

queries = [
    "What is the capital of India?",
    "Tell me a joke.",
    "What is the weather today?",
]

for query in queries:
    logging.info("Running query: %s", query)
    result = chain.invoke({"question": query})
    logging.info("Result: %s", result)
    log_resource_utilization()  # your own helper for tracking resource usage
Use LangSmith to measure your generative AI application's performance. It works with any generative AI application, not just those built with the LangChain framework.
Follow these steps to get started with LangSmith (for Python):
pip install -U langsmith
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
# The below examples use the OpenAI API, though it's not necessary in general
export OPENAI_API_KEY=<your-openai-api-key>
from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

# Define dataset: these are your test cases
dataset_name = "Sample Dataset"
dataset = client.create_dataset(dataset_name, description="A sample dataset in LangSmith.")
client.create_examples(
    inputs=[
        {"postfix": "to LangSmith"},
        {"postfix": "to Evaluations in LangSmith"},
    ],
    outputs=[
        {"output": "Welcome to LangSmith"},
        {"output": "Welcome to Evaluations in LangSmith"},
    ],
    dataset_id=dataset.id,
)

# Define your evaluator
def exact_match(run, example):
    return {"score": run.outputs["output"] == example.outputs["output"]}

experiment_results = evaluate(
    lambda input: "Welcome " + input["postfix"],  # Your AI system goes here
    data=dataset_name,  # The data to predict and grade over
    evaluators=[exact_match],  # The evaluators to score the results
    experiment_prefix="sample-experiment",  # The name of the experiment
    metadata={
        "version": "1.0.0",
        "revision_id": "beta",
    },
)
Use LangServe, an inbuilt part of the LangChain ecosystem, to create an API directly from your chain that a frontend can consume.
Set up LangServe using these five simple steps:
# 1. Create a new LangServe app (requires the langchain-cli package)
langchain app new my-app

# 2. Define your runnable in app/server.py by editing this line (see the server.py sketch after these steps)
add_routes(app, NotImplemented)

# 3. Add any third-party packages your chain needs
poetry add [package-name]  # e.g. poetry add langchain-openai

# 4. Set the relevant environment variables
export OPENAI_API_KEY="sk-..."

# 5. Serve your app
poetry run langchain serve --port=8100
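For step 2, here is a minimal sketch of what server.py could look like once the runnable is defined (this assumes the langchain-openai package and uses ChatOpenAI purely as an example):

from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI

app = FastAPI(title="My LangChain App")

# Expose the model as an API, e.g. POST /openai/invoke
add_routes(app, ChatOpenAI(), path="/openai")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8100)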
Developing a generative AI application with the LangChain framework is straightforward, but as you have seen, there are certain rules you need to follow to avoid pitfalls and use LangChain to its full potential.
Apart from these technical rules, you must prioritize data privacy, transparency, ethical content generation, and continuous improvement. This way, developers can create an application that not only delivers valuable results but also respects user privacy, promotes fairness and trust, and adapts to evolving needs and standards.
Want to know more about LangChain Generative AI applications? Contact our AI/ML experts today.