🤖 Edvoy Genie — AI Assistant & Chat Platform

🧩 Problem

Student support teams were overwhelmed with repetitive queries about courses and admission status, leading to slow response times.

🏗 Architecture

  • RAG Pipeline: Python/FastAPI service using OpenAI LLMs and MongoDB Vector Search.
  • Real-time Messaging: PubNub + Server-Sent Events (SSE) for low-latency chat.
  • Tool Calling: AI agent interacts with internal APIs to fetch real-time student data.
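The tool-calling leg can be sketched as a name-to-function dispatch table. This is a minimal illustration, not the production agent: the registry layout and the `get_application_status` helper are hypothetical stand-ins for calls to the internal admissions APIs.

```python
from typing import Any, Callable, Dict

# Hypothetical registry mapping tool names the LLM may emit to internal API calls
TOOLS: Dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a function so the agent can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_application_status(student_id: str) -> dict:
    # Illustrative stand-in for a call to an internal admissions API
    return {"student_id": student_id, "status": "under_review"}

def dispatch(tool_name: str, arguments: dict) -> Any:
    """Execute the tool the model requested and return its result."""
    if tool_name not in TOOLS:
        raise ValueError(f"Unknown tool: {tool_name}")
    return TOOLS[tool_name](**arguments)

# A model tool call arrives as a name plus JSON arguments and is routed here:
result = dispatch("get_application_status", {"student_id": "S-1024"})
```

The model never touches internal systems directly; it only names a registered tool, and the dispatcher validates and executes the call.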

📊 Impact

  • 40% Automation: Resolved 40% of initial student inquiries without human intervention.
  • 8% Engagement Boost: Faster responses led to higher student interaction.
  • 60% Accuracy Improvement: Tool calling enabled precise answers about application status.

🛠 Tech Stack

Python • FastAPI • OpenAI • MongoDB Vector Search • Redis Streams • PubNub • Docker

💻 Code Snippet

# RAG Pipeline with Vector Search
# Assumes the OpenAI Python SDK v1.x and a `vector_db` client
# backed by MongoDB Vector Search
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def generate_response(query: str) -> str:
    # Retrieve the three most relevant documents from the vector store
    docs = await vector_db.similarity_search(query, k=3)

    # Construct a grounded prompt from the retrieved context
    context = "\n".join(str(doc) for doc in docs)
    prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer based only on the context provided."
    )

    # Generate the response using the LLM
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
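For the real-time leg, each model token is pushed to the client as a Server-Sent Events frame. The sketch below shows only the SSE wire format from the spec; the `stream_tokens` generator and its event names are illustrative, and in production the frames would be served from a FastAPI streaming endpoint.

```python
import json
from typing import Iterator

def sse_event(data: dict, event: str = "message") -> str:
    """Format one Server-Sent Events frame (event + data lines, blank-line terminated)."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_tokens(tokens: Iterator[str]) -> Iterator[str]:
    # Yield each model token as its own SSE frame so the client renders the
    # reply incrementally instead of waiting for the full completion
    for tok in tokens:
        yield sse_event({"token": tok})
    # Signal completion with a distinct event type
    yield sse_event({"done": True}, event="end")
```

Streaming frame-by-frame is what keeps perceived latency low: the first token reaches the student as soon as the model emits it.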
naveenjothi040@gmail.com | linkedin.com/in/naveen-jothi