According to the latest Gartner reports, generative AI adoption in organizations could increase productivity by 24.69%. Developing generative AI applications has become easier thanks to technological advancements, and LangChain is one of the best frameworks for building them.

Why LangChain for Generative AI Applications?

LangChain offers several advantages that make it attractive for building generative AI applications:

  • Build Generative AI with Your Own Data: Unlike other frameworks, LangChain lets you create powerful applications using your company’s specific data.
  • Improving Customer Engagement: It can develop smart chatbots for customer support, offering instant answers to customer questions and automating routine tasks.
  • Avoid Common Pitfalls: While LangChain is user-friendly and simplifies development, some pitfalls can still occur if you are not careful.

Although LangChain is known for its ease of use, there are still certain mistakes you can make. Let us look at the 5 rules you need to follow in a LangChain-based generative AI application.

1) Implement Chain Validation

It is important to validate the chain you create before executing it; running an unvalidated chain can lead to runtime errors.
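LangChain does not expose a single built-in "validate" call for this, so the sketch below illustrates the idea in plain Python: before running a chain, check that every variable its prompt template expects has actually been supplied. `validate_chain_inputs` is a hypothetical helper, not a LangChain API:

```python
from string import Formatter

def validate_chain_inputs(template: str, inputs: dict) -> None:
    """Raise before execution if the template's variables are not all supplied."""
    # Extract the {placeholder} names the template expects
    required = {name for _, name, _, _ in Formatter().parse(template) if name}
    missing = required - inputs.keys()
    if missing:
        raise ValueError(f"Chain validation failed; missing inputs: {sorted(missing)}")

template = "{context}\n{query}"
inputs = {"context": "You are a helpful assistant.", "query": "What is LangChain?"}
validate_chain_inputs(template, inputs)  # passes silently; only then run the chain
```

Failing fast here turns a confusing mid-chain runtime error into a clear message at the point where the inputs were assembled.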


2) Use Environment File to Store Keys

LLMs like OpenAI's models are accessed via API keys (users are charged for each call). These keys should never be hard-coded in the source; instead, store them in an environment (.env) file so that end users cannot access them directly.

Let us see how to implement that below.

  1. Inside the .env file
    OPENAI_API_KEY="YOUR_API_KEY"
  2. Usage
    import os
    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()
    api_key = os.getenv("OPENAI_API_KEY")
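A minimal, runnable sketch of this pattern; `get_openai_api_key` is a hypothetical helper (in a real project you would typically call python-dotenv's `load_dotenv()` first so the .env file populates the environment):

```python
import os

def get_openai_api_key() -> str:
    # Read the key from the environment rather than hard-coding it in source.
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file.")
    return api_key
```

Raising at startup when the key is missing is friendlier than letting the first API call fail with an opaque authentication error.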

3) Automate Memory Management

The memory used for the conversation needs to stay optimized; for example, the context can be derived from just the last 3 questions rather than the entire conversation.

While the memory can be cleaned manually too, it would be best if you set it to auto clean by fixing the number of interactions you want it to remember.

Let us see below how to implement automatic memory management, keeping only the last 3 interactions for context:

from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=3)  # remembers only the last 3 exchanges
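The windowed behavior can be illustrated without any LangChain dependency. The toy class below mimics what a last-k conversation buffer does, under the assumption that each interaction is a (question, answer) pair:

```python
from collections import deque

class WindowMemory:
    """Toy stand-in for a last-k conversation buffer."""

    def __init__(self, k: int = 3):
        # deque with maxlen drops the oldest exchange automatically
        self.exchanges = deque(maxlen=k)

    def save(self, question: str, answer: str) -> None:
        self.exchanges.append((question, answer))

    def context(self) -> str:
        return "\n".join(f"Q: {q}\nA: {a}" for q, a in self.exchanges)

memory = WindowMemory(k=3)
for i in range(5):
    memory.save(f"question {i}", f"answer {i}")

print(memory.context())  # only exchanges 2, 3, and 4 survive
```

Because the deque caps itself at `k` entries, the prompt context stays a bounded size no matter how long the conversation runs.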

4) Proper Prompt Engineering

Providing context helps the model understand the query, while clear and specific prompts reduce uncertainty. It’s an iterative process: start with a basic prompt, evaluate, and refine based on results.

Tools like LangChain’s “PromptTemplate”, “PromptSelector”, and “LLMChain” aid in managing prompts.


Let us see an example workflow of LangChain & Python:

  1. From LangChain, import “PromptTemplate” and “LLMChain”, along with an LLM:
     from langchain.prompts import PromptTemplate
     from langchain.chains import LLMChain
     from langchain_openai import OpenAI
  2. Define the prompt template:
     template = PromptTemplate(
         input_variables=["context", "query"],
         template="{context}\n{query}"
     )
  3. Prepare the prompt inputs:
     context = "You are an expert in Flutter development."
     query = "Explain how state management works in Flutter using the Provider package."
  4. Initialize the LLMChain with an LLM and the template:
     chain = LLMChain(llm=OpenAI(), prompt=template)
  5. Generate the response:
     response = chain.run(context=context, query=query)
     print(response)

5) Log and Monitor Resource Usage

There are chances of errors or unexpected crashes, so it is better to log every action in the system. LangChain provides logging and callback mechanisms that can record every query.

In case of multiple queries, logging becomes crucial. For example, take:

queries = ["What is the capital of India?", "Tell me a joke.", "What is the weather today?"]

Then, for each query in queries:

  1. Invoke the chain:
    result = chain.invoke({"question": query})
  2. Log resource utilization after each query:
    log_resource_utilization()
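The loop above can be sketched in runnable form. `log_resource_utilization` is not a LangChain function; here it is a simple helper built on the standard `logging` and `time` modules (sketched with query and elapsed-time arguments), and the chain is stubbed out so the example runs without an LLM:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("langchain-app")

def log_resource_utilization(query: str, elapsed: float) -> None:
    # Simple helper: in a real app you might also log memory usage via psutil.
    logger.info("query=%r elapsed=%.3fs", query, elapsed)

def invoke_chain(query: str) -> dict:
    # Stub standing in for chain.invoke({"question": query})
    return {"answer": f"(response to: {query})"}

queries = [
    "What is the capital of India?",
    "Tell me a joke.",
    "What is the weather today?",
]

results = []
for query in queries:
    start = time.perf_counter()
    result = invoke_chain(query)
    log_resource_utilization(query, time.perf_counter() - start)
    results.append(result)
```

Timing each call individually makes it easy to spot which query is the slow or failing one when you later read the logs.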

2 Additional Best Practices for LangChain Generative AI Application

1) LangSmith

Use LangSmith to measure your generative AI application's performance. It works with any generative AI application, not just those built with the LangChain framework.

Follow these 4 steps to get started with LangSmith (for Python):

  1. Install LangSmith 
    pip install -U langsmith
  2. Create an API key
  3. Set up your environment
    export LANGCHAIN_TRACING_V2=true
    export LANGCHAIN_API_KEY=
    # The below examples use the OpenAI API, though it's not necessary in general
    export OPENAI_API_KEY=
  4. Run the evaluation with a built-in accuracy evaluator
    from langsmith import Client
    from langsmith.evaluation import evaluate

    client = Client()

    # Define dataset: these are your test cases
    dataset_name = "Sample Dataset"
    dataset = client.create_dataset(dataset_name, description="A sample dataset in LangSmith.")
    client.create_examples(
        inputs=[
            {"postfix": "to LangSmith"},
            {"postfix": "to Evaluations in LangSmith"},
        ],
        outputs=[
            {"output": "Welcome to LangSmith"},
            {"output": "Welcome to Evaluations in LangSmith"},
        ],
        dataset_id=dataset.id,
    )

    # Define your evaluator
    def exact_match(run, example):
        return {"score": run.outputs["output"] == example.outputs["output"]}

    experiment_results = evaluate(
        lambda input: "Welcome " + input["postfix"],  # Your AI system goes here
        data=dataset_name,  # The data to predict and grade over
        evaluators=[exact_match],  # The evaluators to score the results
        experiment_prefix="sample-experiment",  # The name of the experiment
        metadata={
            "version": "1.0.0",
            "revision_id": "beta",
        },
    )

2) LangServe

Use LangServe to create an API directly that a frontend can consume; it is a built-in feature of the LangChain ecosystem.

Set up LangServe using 5 simple steps:

  1. Create new app using langchain cli command
    langchain app new my-app
  2. Define the runnable in add_routes. Go to server.py and edit
    add_routes(app, NotImplemented)
  3. Use poetry to add third-party packages (e.g., langchain-openai, langchain-anthropic, langchain-mistralai, etc.)
    poetry add langchain-openai  # substitute the package you need
  4. Set up the relevant environment variables. For example:
    export OPENAI_API_KEY="sk-..."
  5. Serve your application
    poetry run langchain serve --port=8100

Conclusion

Developing a generative AI application with the LangChain framework is an easy task, but as you have seen, there are certain rules you need to follow to avoid pitfalls and utilize LangChain to its full potential.

Apart from these technical rules, you must prioritize data privacy, transparency, ethical content generation, and continuous improvement. This way, developers can create an application that not only delivers valuable results but also respects user privacy, promotes fairness and trust, and adapts to evolving needs and standards.

Want to know more about LangChain Generative AI applications? Contact our AI/ML experts today.

Drive Success with Our Tech Expertise

Unlock the potential of your business with our range of tech solutions. From RPA to data analytics and AI/ML services, we offer tailored expertise to drive success. Explore innovation, optimize efficiency, and shape the future of your business. Connect with us today and take the first step towards transformative growth.

Let's talk business!